Thank you, Andres. First of all, thank you for inviting me; it's a great pleasure for me to be here. Today I will be talking about a topic I have been working on in the last few years, mainly with Sebastiano Vigna and sometimes with other people. If you already know about centrality and the history of centrality, you may take a nap during the first part of my talk; but if you are not so familiar with the topic, I will provide a sort of bird's-eye view of the subject and, above all, of its history. Then, in the very last part, I will focus on a specific measure of centrality, PageRank. I have chosen PageRank because this conference is about the Google matrix, so it is natural to take it as an example. And I will focus on a specific axiom, rank monotonicity, but this will happen only in the last part of my talk; if I have enough time, I also want to show you a sketch of the proof. More or less, the content of my talk will be as follows. First of all, I will start with a sort of plea for centrality: I will try to convince you that centrality is ubiquitous, that it is very simple but very interesting, and that it pops up in many different situations. In doing so I will mention the fact that by now we have literally hundreds of centrality indices. Of course I don't have time to present them all, but I will take a taxonomic approach in this talk and try to provide you with a sort of guide, a classification of centrality indices. And of course I will single out some of them; in this talk I will concentrate on six, two for each of the three families that I will define. Then I will also try to convince you that a good way to study centrality measures is through axioms. This is not the only way, but it's a very nice one.
And many people have tried, or are trying, this approach to study centrality, and by now we also have a jungle of axioms. So once more I will try to give you a guide and a taxonomy of the possible axioms. This is quite new; it is something we are working on these days. I will focus on some important examples of axioms, and then in the very last part of the talk I will consider rank monotonicity for PageRank, and for this case only I will provide a sketch of a proof. Let me start with the very, very general scenario that I have in mind and that explains why we, as computer scientists, came to like the idea of centrality. I described it as an information retrieval system, but in fact it is a much more general thing. What I have in mind is a document repository, whatever it is. It may be the set of pages retrieved by Google, or the set of items on sale on Amazon, or the members of a social network like Facebook; it can be many, many different things. In each of these examples I will call the items, or the objects, documents, and I write D for the set of documents. You should think of D as finite, but very, very large. Then there is a set of queries. The set of queries is typically infinite; in most cases you have a query language to write the queries, but don't think that queries are always explicit. In some cases the queries really are explicit: when you type a query into a search engine, that query is explicit, an actual user typing an actual query in an actual query language. But in fact part of the query is implicit, because the query that arrives at Google is probably enriched with information that you don't explicitly type, like your location, whether you are typing on a mobile device or not, and so on. So this is an example where the query is partly implicit and partly explicit.
But in some cases the query is totally implicit. For example, in product recommendation on Amazon, say, you are recommended a product based on your actions. There is no explicit query there; there is your last purchase, or the fact that you have been looking for some specific thing on Amazon, and so on. All of these I would call queries. And then there is the system. The system takes a query in, and what it should do is select the appropriate documents, whatever they are: for a search engine it should select the pages appropriate for your query, for product recommendation it should select the products you should be recommended, and so on. Typically the output is two things: a set of documents that are appropriate, and besides, for each of them, a score, that is, a value that measures how appropriate, or relevant, that specific document is for your query. And, more often than not, D is endowed with a graph structure. The simplest example is probably the pages retrieved by a search engine, because in that case the pages are part of a graph called the web graph; but in all the other cases you can think of a graph underlying D. Now, the selection part can in fact be simplified away. Many people prefer not to think of selection, but rather to think that you score all the documents, simply assigning a very, very low score to the documents that you wouldn't select. With this simplification, which is not a big one, the whole system can be described simply as a function that assigns a score to every query and document pair; the larger the score, the more appropriate the document is for the query. Now, the second simplification that I will make is apparently crazier, because now I am assuming that there are no queries. This seems absurd, but in fact there are two good justifications for this simplification.
The first one is that in some cases you are not interested in how relevant a document is for a query, but rather in how important a document is overall: you are interested in importance, not in relevance. The second justification is that in some cases you can take the query and modify the document repository so as to reflect the query you have in your hands, and after this you can throw the query away. In both cases the query is not there anymore, and the system is now simplified to something that assigns a score to every document. The third simplification is that this scoring happens based only on the linkage structure: we forget about the rest of the information, look only at the graph structure, and assign a score to every document. This is what people call centrality. Centrality is a way to assign a measure of importance to every vertex in a graph; always keep in mind that the nodes of our graph are now the documents. This is called a centrality index, or centrality measure, or centrality score. I prefer not to use the word rank if I can, because rank is something else: you use scores to assign a rank to the documents, right? You can sort the documents by decreasing score; this is what people do. But the rank is something different from the score, and at the end of my talk this will be super clear. It is unfortunate that PageRank is called PageRank; the correct name would be PageScore, because it assigns a score to every page, not a rank. Nonetheless, I will try to always refrain from using the word rank in this context. Now, where was centrality born, where does the word centrality come from? In fact, the first usage of the word comes from the social sciences. It was used for the first time by a group of sociologists working at MIT at the end of the 1940s, a group led by Alex Bavelas.
And from that seminal work a stream of research started back in the 50s that is still going on: centrality is used a lot by social scientists. Psychometricians, sociologists, economists, they all use different kinds of centrality indices. Here I wrote that centrality was brought to computer science through information retrieval; this happened with the advent of search engines. But this is a sort of a posteriori reconstruction, because as computer scientists we preferred to ignore the decades of history of centrality that preceded us. In fact, if you read the PageRank paper, for example, it makes no mention of the fact that it is, in fact, developing a centrality measure. It is only recently that we have been looking back at the past and seeing that many things we as computer scientists are doing had been done much earlier, either by social scientists or by mathematicians; I will try to give you some pointers later. So, as I said, centrality has a key role, and by now there are so many centrality indices that it is sometimes difficult to find guidance. The way I prefer to look at indices is by clustering them into three families. Remember that the overall purpose is to assign to every node of a graph a score that measures how important that node is. The first family is what I call path-based indices. Path-based indices are based on the number of paths, or sometimes shortest paths, passing through or arriving at the vertex you are interested in. Two examples in this family are very famous: they are called betweenness and Katz's index. I will give you the exact definitions in a second. The second family, which we as computer scientists like much more than the first one, is the spectral family. It is a very general thing, but we can say that it is based on some linear-algebra construction, on some eigenvector defined by a matrix derived from the adjacency matrix of the graph. PageRank is a good example; you all know about it.
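Since PageRank will return as the running example later, here is a minimal, illustrative sketch of the familiar iteration. The graph, function name, and parameter values are my own assumptions for the sake of the example, not something from the talk.

```python
# Minimal PageRank sketch (not a production algorithm): repeatedly apply
# r <- alpha * r * G_r + (1 - alpha) * v, where G_r is the row-normalized
# adjacency matrix and v is a uniform preference vector.

def pagerank(succ, alpha=0.85, iters=100):
    """succ: dict mapping each node to its list of out-neighbors."""
    nodes = sorted(succ)
    n = len(nodes)
    r = {x: 1.0 / n for x in nodes}   # start from the uniform distribution
    v = 1.0 / n                       # uniform preference vector
    for _ in range(iters):
        nxt = {x: (1 - alpha) * v for x in nodes}
        for x in nodes:
            out = succ[x]
            if out:                   # spread x's score along its outgoing arcs
                share = alpha * r[x] / len(out)
                for y in out:
                    nxt[y] += share
            else:                     # dangling node: redistribute uniformly
                for y in nodes:
                    nxt[y] += alpha * r[x] / n
        r = nxt
    return r

# Toy graph: a -> b, a -> c, b -> c, c -> a.
scores = pagerank({"a": ["b", "c"], "b": ["c"], "c": ["a"]})
```

The scores sum to 1, and the node with the most incoming weight (c) comes out on top.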
Another example is Seeley's index, but there are many more. And finally there is a third family that I call geometric indices. Geometric indices are based on distances, distances from a vertex to the other vertices. Probably the best example is closeness centrality; harmonic centrality is another example. The six centrality indices you see here will be defined in a while in my talk. I should say that these three families are not completely distinct, and in fact the first two have a large overlap: many times the same index can be defined using paths or with a spectral construction. Let me give you the examples I was mentioning. I start with path-based centralities. As I said, a path-based centrality depends on the paths entering or passing through a node, and without further ado let me first define betweenness. Betweenness is very popular among social scientists; if you talk with people from the social sciences, many of them think it is the only measure of centrality. It was defined by Anthonisse in the early 70s, and it goes like this: the betweenness of a node x is defined by this summation, where you take every possible pair of vertices y and z, look at all the shortest paths connecting them, and count how many of those shortest paths pass through x; you take that ratio there and sum over all possible pairs y, z. If you want, it is proportional to the probability that a randomly selected shortest path passes through x. In my view this is not a measure of importance but rather a measure of robustness, but still people use it many, many times with the intention of measuring the importance of nodes. Sometimes robustness is a measure of importance; it depends on your problem. Katz centrality was defined much earlier. It can be defined in many different ways, but probably the simplest one is this one.
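Before moving on, the betweenness definition above can be checked numerically. Here is a brute-force sketch under my own assumptions (illustrative graph, hypothetical helper names); it uses the standard identity that the number of shortest y-z paths through x is the product of the y-x and x-z path counts whenever the distances add up.

```python
from collections import deque

def bfs_counts(succ, s):
    """BFS from s: distances and numbers of shortest paths (directed)."""
    dist, cnt = {s: 0}, {s: 1}
    q = deque([s])
    while q:
        u = q.popleft()
        for w in succ[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                cnt[w] = 0
                q.append(w)
            if dist[w] == dist[u] + 1:
                cnt[w] += cnt[u]
    return dist, cnt

def betweenness(succ):
    """succ: dict mapping every node to its list of out-neighbors."""
    nodes = sorted(succ)
    info = {s: bfs_counts(succ, s) for s in nodes}
    b = {x: 0.0 for x in nodes}
    for y in nodes:
        dy, cy = info[y]
        for z in nodes:
            if z == y or z not in dy:
                continue
            for x in nodes:
                if x in (y, z):
                    continue
                dx, cx = info[x]
                # x lies on a shortest y->z path iff the distances add up
                if x in dy and z in dx and dy[x] + dx[z] == dy[z]:
                    b[x] += cy[x] * cx[z] / cy[z]
    return b

# Diamond: a -> b -> d and a -> c -> d; b and c each carry one of the
# two shortest a->d paths.
bw = betweenness({"a": ["b", "c"], "b": ["d"], "c": ["d"], "d": []})
```

On the diamond, b and c each get betweenness 1/2, while a and d get 0.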
First of all, Katz centrality is a parametric centrality measure, so you have to choose a parameter, called alpha in this formula. What you do is count all the paths ending at x of length t, for every possible t. Here I mean all paths, not just elementary paths, and this number is discounted exponentially by this factor here. So essentially a node is important if there are many short paths arriving at it, or many, many long paths, because long paths count less due to the decay factor. Starting from where? Wherever: they should end at x, but they can start anywhere; you count all the paths. This is Katz centrality. By the way, many of these measures were originally defined on undirected graphs. I am in fact defining them for directed graphs, and in that case there is always the question of whether to consider outgoing paths or incoming paths. I always use what is usually called the negative definition, which means I consider incoming paths, because in most situations you are more interested in incoming paths than in outgoing ones, and importance is endogenous: it is conferred by the people pointing to you. So these are the two definitions from the path-based tribe; let me move to spectral centralities. Spectral centrality in a way has a longer history, and probably the first, simplest definition of a spectral centrality is to just take the left or right dominant eigenvector of the adjacency matrix. This is called eigenvector centrality. It is not used much, for many reasons, but it is simple and sometimes it is considered. In fact, we can trace its usage back to the works of Landau at the end of the 19th century. Landau was interested in evaluating the importance of chess players based on chess tournaments. He was manipulating matrices containing 0, 1, and 1/2: 0 means a defeat, 1 means that x won over y, and 1/2 was used for a draw.
Landau observes that if you take M times the all-ones vector you get a reasonable measure, because you are simply summing 1 for every win, 1/2 for every draw, and 0 for every defeat. But it is even better to consider the powers of M, and at the end of a long discussion he says that what you actually need is to solve this eigenvector equation. I think this was the first time, as far as we know, that somebody tried to measure importance through an eigenvector. This idea was brought to the social sciences by Berge at the end of the 50s; I don't think he was aware of the work by Landau, and I don't think he quoted Landau. Now, Seeley, at the end of the 40s, proposed a similar idea to measure children's popularity. He had a group of children and wanted to measure who was the most popular. He came up with this recursive formula: the popularity of a child x is the summation of the popularities of the children y that like x; here an arc of the graph means that y likes x. But there is a small variant with respect to eigenvector centrality: the popularity of a child is divided by the number of people he likes, that is, by the out-degree of y in the graph. This is the difference between eigenvector centrality and Seeley's index: eigenvector centrality is the left dominant eigenvector of the adjacency matrix, whereas Seeley's index is the left dominant eigenvector of G after row normalization; I write Gr for this. And these are two different indices. Let me come to PageRank, which is something you all know. It was defined in 1998, and you can define it in many different ways; it is the left dominant eigenvector of this matrix here. You recognize two things: there is the row-normalized version of the matrix, as in Seeley's index, but there is also this decay factor. If you write it as a summation, you will easily see its similarity with Katz.
In fact, Katz centrality can be defined in almost the same way, except that in Katz you don't have the row normalization. You can think of PageRank as the child of two parents, Seeley's index and Katz's index: from Seeley you get the row normalization, and from Katz you get the damping factor. This is the recursive definition of PageRank, but you all know about it. Oh, there should be a v here. I will use this definition later, when I talk about PageRank; remember that I am using the slightly generalized definition with a preference vector v. In the original paper by Brin and Page, v is the uniform vector; it doesn't change much. Let me pass to the last family, geometric centralities. Geometric centralities depend on distances. I can define precisely what I mean by using what I call the distance count function. The distance count function of a node x at distance t is the number of nodes z whose distance to x is t. If you let t range over all positive integers, you get what I call the distance count vector. Of course, it is ultimately zero: after n, it will certainly be zero. This distance count vector is in one-to-one correspondence with something that is probably better known, called the neighborhood function. The neighborhood function is defined in exactly the same way, but with a less-than-or-equal-to here. Geometric centralities are the centralities that are a function of the distance count vector. What I mean is that if two nodes have the same distance count vector, they will have the same centrality. For example, trivially, in-degree is a geometric centrality, because it is just the projection of this vector on its first component: the first component of this vector is the in-degree, which is by itself a measure of centrality, and it is a geometric centrality. Two more famous examples of geometric centrality are closeness and harmonic centrality.
Closeness is usually defined via the average distance from a node to all the other nodes of the network, but in fact you want to take the reciprocal, because, as usual, you want important nodes to have large centrality; so you take the reciprocal of this summation here. You must be careful, because closeness was defined by Bavelas, and it was actually the first centrality measure defined in the social sciences, but Bavelas had in mind undirected, connected graphs. If the graph is directed, and possibly not strongly connected, you must be careful with this summation: you take it only within the same connected component as x, or, if you prefer, only over the y's that are at finite distance to x. This causes a number of problems, a phenomenon that is called the big-in-Japan effect: people living in a small community tend to be considered more important than they should be. There are some ways to adjust for this. One possible way is to consider instead what is called harmonic centrality. The difference between the two formulas is very small, but here you can take the summation over all possible y's, with the proviso that when this guy here is infinite, you just sum zero. The idea of harmonic centrality was inspired by works of Marchiori and Latora in the early 2000s, but can be dated back to the 50s. So, with all these definitions in mind, and remembering that the six definitions I gave are just the tip of the iceberg, since there are so many possible centrality measures, one problem that one often tries to solve is: which centrality measure is better, or how does this centrality measure compare with that other one? The study of centrality measures can be done individually: you can focus on a specific measure and study it in depth, like taking PageRank and studying how PageRank depends on the graph structure, on the parameters.
The parameters, in the case of PageRank, are the damping factor, the preference vector, and so on. So you can study a measure in depth individually, or you can try to compare different measures, and you can do both things in two different ways. One is by using some external source of ground truth, if you have one. Or you can take an axiomatic approach: you state a property that is desirable, or undesirable, for a centrality, and then you check which of the measures satisfy that property. I think the axiomatic approach is nice because, at the end of the day, it allows you not so much to decide which measure is better, but rather to understand how measures are related to each other, much like the T axioms in topology allow you to classify topological spaces. Axioms for centrality are not a new thing; you can trace the study of these axioms back to Sabidussi in the 60s. Many people are trying to do this: they define axioms that seem suitable and then see which measures satisfy them, sometimes in a comparative way, but sometimes aimed at specific indices. For example, here I am quoting two works that only study axioms for PageRank. Sometimes, like in this work, they even take a sort of hardcore approach: they want to define a set of axioms that is necessary and sufficient for a given index. By the way, they succeed in this case, but only for a sort of degenerate version of PageRank without alpha. Now, once more, I think the axioms are so many that it is wise to try to classify them, and I am trying to do that using three interesting classes. One I will call invariance properties; then I will talk about score dominance properties; then I will talk about rank dominance properties. I should say that there are also many other axioms that don't fall into these categories, but these first three categories are interesting by themselves.
Let me start precisely with invariance properties, and first I want to give you a flavor of how these properties are usually stated. Usually they are stated like this: you have two graphs G and G′, and two nodes, one living in G and the other in G′, and these two things satisfy a number of constraints; under these constraints you want the score of x in G to be equal to the score of x′ in G′. Depending on the actual constraints, you get different invariance properties. Let me start with a very simple one, what I call invariance by isomorphism. It is the following: G and G′ are two isomorphic graphs and f is the graph isomorphism; you take a node in G and its image in G′, and you want the two nodes to have the same score. Now, this property is so obvious that in fact it is taken for granted: when we say centrality we mean a measure that satisfies this axiom, right? Because we want centrality to depend only on the graph structure, not on the names of the nodes. So when we define centrality we should always state this property, which is fundamental, even if usually people don't do it. This is a very basic invariance property, and all the centrality measures that I defined, and that are defined in the literature, do satisfy it. You can go one step further and consider another kind of invariance, which I call invariance by neighbors. For invariance by neighbors you have just one graph G and two nodes living in the same graph, and these two nodes have the same in-neighbors and the same out-neighbors; under this condition you want x and x′ to have the same score. Otherwise said, if you take two twin nodes that have exactly the same interface to the graph, where by interface I mean incoming and outgoing arcs, then you want them to have the same score. Is it satisfied by the measures I defined? Well, yes, the answer is yes.
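A quick numeric check of this twin-node claim, sketched under my own assumptions (a small made-up graph, with harmonic centrality in the incoming-paths convention used throughout the talk):

```python
from collections import deque

def harmonic(succ, x):
    """Harmonic centrality of x: sum of 1/d(z, x) over all z != x
    (unreachable z contribute 0). Computed by BFS on reversed arcs."""
    pred = {u: [] for u in succ}          # build the reversed graph
    for u, outs in succ.items():
        for w in outs:
            pred[w].append(u)
    dist = {x: 0}
    q = deque([x])
    while q:
        u = q.popleft()
        for w in pred[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                q.append(w)
    return sum(1.0 / d for z, d in dist.items() if z != x)

# x and y are twins: same in-neighbors {a, b} and same out-neighbors {b}.
g = {"a": ["x", "y"], "b": ["x", "y"], "x": ["b"], "y": ["b"]}
hx, hy = harmonic(g, "x"), harmonic(g, "y")
```

As the axiom demands, the two twins get exactly the same score (here 1 + 1 + 1/2 = 2.5 each).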
In fact, it is easy to verify that all geometric centralities satisfy this property, and the same happens for spectral centralities. And you can try to take this one step further, right? Because all the measures I mentioned so far only depend on incoming links, you may think that this invariance property can be strengthened to what I call invariance by in-neighbors. Invariance by in-neighbors simply means that if two nodes in a graph have the same in-neighbors, they should have the same score. Is it true for the measures I defined, or isn't it? What do you guess? Well, at the beginning, when I first thought about these axioms, I believed that all the measures I defined satisfied it, but that is wrong, it is not true, and I will give you an example of why it is not true. I will show that geometric centralities in general cannot satisfy this kind of invariance. Take these two guys: x and x′ are two nodes living in the same graph, and they have exactly the same set of in-neighbors, but they may have different out-neighbors. So the in-neighbors are the same, but the out-neighbors can be different. Now take any other node z, and consider a shortest path, say, from z to x. However this shortest path looks, at the very last step it contains an arc from one of the in-neighbors of x to x, and you can slightly modify the path to obtain one which is exactly the same but ends at x′, right? So what I said is that a shortest path from z to x can be turned into a shortest path from z to x′, and the other way around. This means that the distance from every such node z to x is the same as its distance to x′.
But there is one important difference: I was considering a node z which is neither x nor x′, because x and x′ themselves cannot be considered here; the distance from x to x′ and the distance from x′ to x can be completely different. The result is that if you consider their distance count vectors, they are almost the same, but not exactly the same: there will be one single unit that moves. For example, here the single unit moves from here to here, because in this example the distance from x′ to x is 4, whereas the distance from x to x′ is 8. And since these two vectors are different, in general you cannot conclude that the two nodes have the same centrality; in fact, we have counterexamples for all the families of geometric centralities. This is a small difference, but it is enough to make the axiom fail. On the other hand, if you consider symmetric graphs this does not happen, so there geometric centralities do satisfy invariance by in-neighbors; and by the way, spectral centralities all satisfy invariance by in-neighbors. So far I have considered invariance properties; now I move to score dominance properties. Score dominance properties go like this: you have two graphs, two nodes, some constraints, and based on these constraints the centrality score of x in G must be at least as large as the centrality score of x′ in G′. Sometimes you require it to be strictly larger, in which case I talk about strict dominance. One example of dominance was considered recently in this wonderful work by Schoch and Brandes of 2016, which by the way is a very nice piece of mathematics: they try to define a large family of centrality measures based on a semigroup construction, but at some point they consider this dominance, which I call dominance by in-neighbors. It goes like this: you have two nodes in a graph, and the in-neighbors of x are a subset of the in-neighbors of x′; then you want the score of x to be no larger than the score of x′. Why should it be like that?
Well, because of course if the in-neighbors of x are a subset of the in-neighbors of x′, then x′ should be at least as important. But we already know that in directed graphs this property cannot be satisfied by geometric centrality measures, because if it were satisfied, invariance by in-neighbors would be satisfied too. In fact, Schoch and Brandes studied this property only on undirected networks. Another very famous example of score dominance is score monotonicity. Score monotonicity goes like this, and pay attention, because this is the property I will be focusing on in the rest of my talk: you have a graph that does not contain a specific arc from x to y, you add this arc, and what you want is that this addition increases the score of y. This is in fact the property that was considered by Katia in her talk two days ago; she called it sensitivity. This property has been studied a lot for many different indices, and this is the result for the indices we are looking at. You see that there is no apparent pattern. There are spectral measures, like Seeley's index, that don't satisfy it. PageRank does satisfy it, and in a sense Katia took it for granted during her talk: if you add an arc towards a node, the score of that recipient node will always increase. Betweenness does not satisfy it; Katz does; and so on. The situation is sometimes different on strongly connected graphs. Let me just show you why for closeness and PageRank. First, let me start with PageRank. Score monotonicity for PageRank was proved by Chien and others in their work of 2004. They in fact had to assume that all nodes have nonzero PageRank for their proof to work. We generalized it under a much weaker assumption: we only assumed that the source node has a positive score. For closeness, the fact that closeness in general does not satisfy score monotonicity can be shown on a very small example, this one. Take this graph here. The centrality of y is 1: there is only one node that can reach y, and it is at distance 1.
But if you add the arc from x to y, you now have two nodes at distance 1, and the overall centrality of y becomes 1/2. So in this example you add an arc towards y, and you decrease the centrality of y instead of increasing it. On strongly connected graphs, though, closeness centrality does satisfy the axiom, and the reason is very simple. This is the definition of closeness: in a strongly connected graph, the summation ranges over all possible y's, so after you add the arc, the summation is taken over exactly the same set of y's. But adding an arc will in general decrease, or at least not increase, all the distances, and there is at least one distance that strictly decreases, namely the distance from x to y. So the summation strictly decreases, which means that closeness strictly increases. Now, score dominance properties are interesting, but in fact they are not so interesting, because, as we saw in Maria's talk two days ago, it may be the case that you add this arc from x to y, and we know that y increases its score, but this does not tell us much: maybe there are other nodes that increase their score as well, and maybe they become more important than they used to be with respect to y. That is why people usually prefer to consider rank dominance properties. Let me skip the general approach and go directly to the definition of rank monotonicity, which is the one I want to focus on. This is the rank version of the score monotonicity property I discussed a second ago. You have a graph, you add one arc from x to y, and you want this to happen: if y used to be better than z, then after the addition of the arc the same must be true; and if y was in a tie with z, then after the addition either y is still in a tie with z, or it is even better. So you don't want only y to increase its score: you want it to keep its position in the ranking. And this is the weak version of rank monotonicity; the strict version requires that all ties be broken in favor of y. Not only do you
have the same position, but you also break all the ties in favor of y. This table tells you the situation for rank monotonicity, and for comparison I am putting in parentheses the score monotonicity properties. You see that in all cases where score monotonicity is satisfied, rank monotonicity is satisfied as well, and in many cases it is satisfied in the strict sense; a star here means strict. So at least for these six measures there is nothing surprising: if the score increases, the rank is preserved as well. In the few minutes I have left, I will focus on rank monotonicity for PageRank. The story goes like this. In the work by Chien and others, where they proved that PageRank is score monotone, they also proved weak rank monotonicity, under the assumption that every score is positive, every PageRank value is positive. We did the same more recently, but we were able to prove the strict version of rank monotonicity, under a slightly stronger hypothesis: we had to assume that the preference vector is everywhere positive. This is stronger, of course, because if the preference vector is everywhere positive then all the scores are positive; but we can prove that this condition is in fact necessary: we have counterexamples of graphs that are strongly connected, but where the preference vector is zero somewhere, and that fail to satisfy the strict version of rank monotonicity. Now, the difference between what Chien and others did in 2004 and what we did is also in the technique we had to use. In their work, which was by the way very interesting, they used the fact that the Google matrix is a regular Markov chain, and properties of Markov chains, whereas we had to use properties of M-matrices, which we think is interesting because it is very general. In fact, what we did can be applied to many different measures: we are able to apply our theorem at the same time to PageRank and Katz, for example. So let me give you an idea of what we did. We worked in
a more general setting, of which PageRank is a special case and Katz is also a special case, called damped spectral ranking. You start with a non-negative matrix M, with a damping factor smaller than the reciprocal of the spectral radius of M, and with a strictly positive preference vector v, and you define the rank like this. Now, if you are confused by the generality of this definition, think that you get PageRank when M is the row-normalised version of the adjacency matrix, and you get Katz when M is exactly the adjacency matrix; but this is, as you can see, more general. So this is the kind of centrality we are working on.

The basic lemma that we use is the following. It's an interesting lemma, and I discuss it in some detail because I think it can find applications to other problems as well. It's a lemma about inverses of M-matrices. You have an M-matrix, you take its inverse C, and you define the rank by multiplying a preference vector v by C. Now consider two columns y and z of this matrix C, and compute all the ratios between elements in the y column and elements in the z column; of course, to do this you must for the moment assume that the z column does not contain any zero. Among all these ratios there is one that involves the diagonal element in column y, namely the ratio c_yy / c_yz. It turns out that this ratio is the largest of all, assuming c_yz to be non-zero. In fact, you don't need to write it as a ratio: you can write it as an inequality, so that the only element you are assuming to be non-zero in this column is c_yz. This is the first property which is interesting and which we use. The second property is a consequence of the first: if you take the rank of y, every element of column y is bounded by q times the corresponding element of column z, where q is that largest ratio, and you get that r_y is less than or equal to q times r_z. As a consequence, if r_z is instead less than or equal to r_y, then q must be larger than or equal to one. This is the key property that we will use in the theorem.

So the theorem goes like this. Remember that we
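To make the definition concrete, here is a minimal numerical sketch in Python. The graph, the parameter values, and the NumPy implementation are my own illustrative assumptions, not material from the talk: it computes the damped spectral ranking r = v (I − αM)⁻¹, recovers Katz and PageRank as the two special cases just mentioned, and numerically checks the weak form of the M-matrix lemma in its "path product" formulation.

```python
import numpy as np

# Small directed graph (an illustrative example, not the talk's data).
A = np.array([[0, 1, 1],
              [0, 0, 1],
              [1, 0, 0]], dtype=float)
n = A.shape[0]
v = np.full(n, 1 / n)  # strictly positive preference vector

def damped_spectral_ranking(M, alpha, v):
    """r = v (I - alpha M)^{-1} = v sum_k alpha^k M^k, for alpha < 1/rho(M)."""
    return v @ np.linalg.inv(np.eye(M.shape[0]) - alpha * M)

# Katz: take M to be the adjacency matrix itself.
katz = damped_spectral_ranking(A, 0.3, v)

# PageRank (up to a normalisation constant; no dangling nodes here):
# M is the row-normalised adjacency matrix.
G = A / A.sum(axis=1, keepdims=True)
pr = damped_spectral_ranking(G, 0.85, v)

# Numerical check of the weak form of the lemma: for the inverse
# C = (I - alpha M)^{-1} of an M-matrix, the diagonal ratio dominates,
# i.e. C[y, y] * C[x, z] >= C[x, y] * C[y, z] for all x, y, z.
C = np.linalg.inv(np.eye(n) - 0.85 * G)
for x in range(n):
    for y in range(n):
        for z in range(n):
            assert C[y, y] * C[x, z] >= C[x, y] * C[y, z] - 1e-9
```

Since v sums to one and G is row-stochastic, this unnormalised PageRank vector sums to 1/(1 − α); dividing by that constant gives the usual probability-vector form.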
are looking at this expression here, and for PageRank, M is the row-normalised adjacency matrix. Let me observe that if you add an arc from x to y and look at the difference between the new matrix and the old matrix, what you get is something like this: all the rows other than the x-th row are zero, and the only difference is in the x-th row, which contains some negative entries, corresponding to the old out-neighbours of x, and exactly one positive entry, in position y. I call this vector delta. What we do is observe that the difference between the rank after the addition of the arc and the rank before the addition of the arc can be written in this way, using the Sherman-Morrison formula: kappa is a positive constant, which I will get rid of in the next step; delta is the vector that I just showed you, remember, all entries zero or negative except for exactly one positive entry; and then there is the inverse of an M-matrix, and this is where we use the lemma.

Now, what we need to prove is that if r_y used to be at least as good as r_z, then after the addition of the arc it is strictly better. In fact, what we prove is that the increase of score in z is less than the increase of score in y; by this formula, the increase of score can be written explicitly, so what we need to prove is that the z-th element of this vector is less than its y-th element, under this condition. You have to discuss many cases, but the only non-trivial one is when the (y, z) element of this matrix is positive; the other cases are easy. In this case, using the lemma and the fact that r_z is less than or equal to r_y, we know that q is larger than or equal to one. So if you look at it, the y-th element is delta_y times c_yy, where C is the inverse matrix, minus some elements; these elements are all negative, so I can write them with a minus sign and take their absolute value. And using the fact that q is larger than or equal to one, we just collect q
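The rank-one update at the heart of this argument can be checked numerically. The sketch below (graph, node choices, and constants are my own illustrative assumptions) adds one arc x → y to a small graph, recovers the new inverse (I − αG')⁻¹ from the old one via the Sherman-Morrison formula, since only row x of the normalised matrix changes, and verifies that the score of the arc's target strictly increases:

```python
import numpy as np

alpha = 0.85
# Illustrative graph; node 3 initially has no incoming arcs.
A = np.array([[0, 1, 1, 0],
              [0, 0, 1, 0],
              [1, 0, 0, 0],
              [1, 0, 0, 0]], dtype=float)
n = A.shape[0]
v = np.full(n, 1 / n)  # everywhere-positive preference vector

G = A / A.sum(axis=1, keepdims=True)
B = np.eye(n) - alpha * G
Binv = np.linalg.inv(B)
r_old = v @ Binv  # damped spectral ranking (PageRank up to a constant)

# Add the arc x -> y: only row x of the normalised matrix changes.
x, y = 1, 3
A2 = A.copy()
A2[x, y] = 1
G2 = A2 / A2.sum(axis=1, keepdims=True)
delta = G2[x] - G[x]  # negative at old out-neighbours of x, positive at y

# New matrix: I - alpha*G2 = B + u delta^T with u = -alpha * e_x,
# a rank-one update, so Sherman-Morrison gives the new inverse in one step.
u = np.zeros(n)
u[x] = -alpha
sm_inv = Binv - np.outer(Binv @ u, delta @ Binv) / (1 + delta @ Binv @ u)

r_new = v @ sm_inv  # y's score strictly increases after the arc addition
```

This only illustrates score monotonicity for one concrete graph; the point of the talk's proof is that the lemma turns this kind of update into the strict rank monotonicity statement for all graphs.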
from here and get the inequality that we want. This proves the weak inequality; to get to the strict version of the inequality we need some more work, but I wanted to give you the flavor of the proof, because it is very different from the one that Chien and others used in their work: they used the properties of regular Markov chains.

So, what is the take-home message of my talk, if there is one? I wanted to convince you, and I hope that I did, that centrality is important and very ubiquitous. Unfortunately, we have so many centrality indices, and I think we should make an effort to classify them into families; our classification into three families is still very rough, and I think we can do a better job. Axiomatization is one of the ways to do this; I'm not claiming that it's the only one, or the best one, but it's a good one. And by now there are so many axioms that the axioms themselves should, in a way, be taxonomized, and this is the very beginning of an attempt to look for a taxonomy of axioms. And be careful, because sometimes you see a property that appears so trivial that it must obviously be either true or false, but proving that it is actually true or actually false sometimes requires a lot of work. So in case you are interested in this kind of proofs, be careful, because it's much harder than you think at first sight. Thanks a lot for your attention.

Thank you very much. Questions? "Do you have examples of...?" Examples of what? "Examples with concrete data, with Google." Yes. I didn't want to present this in this talk, but the axiomatic approach is not the only one that we use; in fact, it's not even the first one. At the beginning we looked at actual data, and the web graph was the main example: a comparison between all the indices. I don't have that slide here, but you can see that some indices behave pretty much alike. For example, PageRank is very, very similar in practice to in-degree; PageRank is also quite similar to Katz, but
it's, for example, very different from closeness. And by the way, it is based on these observations that we realized that betweenness is an index by itself: it doesn't have any strong relation with any other index. So, in a nutshell, we, and other people, did a lot of work on practical graphs and on these indices. The problem is what you come out with. You get two rankings, say the closeness centrality of the .uk pages of a large web graph, and then the PageRank centrality, and you can compare them, with Kendall's tau, say, to see whether they are similar or not. But once you decide that they are not similar, deciding which one is better is a tough job: you must have some ground truth to do that.

"You discussed the change of PageRank when you add a link. If you have a weighted graph and you change a weight, can everything be analyzed analogously?" Probably so; yes, most probably so. It's a matter of sensitivity: every result we used is in fact continuous, so I think it's easy to generalize to weighted PageRank. But in that case it is not really clear what you want from rank monotonicity: that if you increase the weight by epsilon you get dominance? I am pretty sure that for every positive epsilon, if you increase the weight by epsilon, you get strict dominance. I'm not 100% sure, but probably it's like that.

"One question: you spoke about centrality in terms of vertices, but are there measures of centrality for edges?" Betweenness was originally defined for edges, yes. Of course, many people study centrality on arcs or edges, depending on whether you are considering directed or undirected networks, which is also quite interesting. You can do it directly, or you can do it indirectly, by looking at how sensitive the network is to the deletion of an edge: an edge is important when, by deleting it, you change a lot, you disrupt a lot, the centrality of the nodes.

"Actually, I have two questions. One is: you mentioned you have counterexamples; if I
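As a concrete illustration of this kind of comparison (the tiny synthetic graph and the hand-rolled tau-a implementation below are my own assumptions, not the talk's web-scale experiments), one can compute two centrality score vectors and their Kendall correlation:

```python
import numpy as np
from itertools import combinations

def kendall_tau(a, b):
    """Plain Kendall tau-a between two equal-length score vectors:
    average of sign-agreements over all pairs of items."""
    pairs = list(combinations(range(len(a)), 2))
    s = sum(np.sign(a[i] - a[j]) * np.sign(b[i] - b[j]) for i, j in pairs)
    return s / len(pairs)

# Tiny directed graph (illustrative only).
A = np.array([[0, 1, 1, 0],
              [0, 0, 1, 0],
              [1, 0, 0, 1],
              [1, 0, 0, 0]], dtype=float)
n, alpha = A.shape[0], 0.85
G = A / A.sum(axis=1, keepdims=True)
v = np.full(n, 1 / n)

pagerank = v @ np.linalg.inv(np.eye(n) - alpha * G)
in_degree = A.sum(axis=0)

print(kendall_tau(pagerank, in_degree))  # a value near 1 means similar rankings
```

On real graphs one would use an efficient O(n log n) tau implementation and a tau variant that handles ties explicitly; the quadratic version above is only for clarity.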
understand, when the damping vector has zeros, then rank monotonicity doesn't work. Does this have an effect on the usability of PageRank, or is it only some degenerate case?" It's only the strict version that doesn't hold. I don't think it has any practical impact, because the weak version will hold in any case, as long as all pages have positive rank. "The other question is: are there axioms for stability, like, adding something will not significantly change the scores?" Well, you can think of rank monotonicity as a form of stability: what it says is that the addition of that arc does not change the relation between the target node and the other nodes. I'm not sure about the rest of the network; I'm not sure whether, if you add an arc here, you can change the relation between two nodes that are not involved in the addition. Probably so. This would be another very interesting axiom to study. Thank you.