In 1948, Claude Shannon identified the fundamental concept of information theory: information is an answer you don't already know. So how much information does a graph contain? Suppose you have a graph representing relationships between objects. Then you can answer questions like: are $u$ and $v$ related? How many other objects is $u$ related to? What happens to the relationships if $v$ is removed? And so on.

Suppose we want to use our graph to answer a question. If our question has $k$ possible answers, where answer $A_i$ occurs with frequency $p_i$, then the information content of the graph is given by the Shannon entropy
$$H = -\sum_{i=1}^{k} p_i \log_2 p_i.$$

For example, let's consider the complete graph $K_n$ and determine how much information is contained in the degrees of its vertices. The question "what is the degree of the vertex?" has only one answer, $n-1$, and this is the answer 100% of the time. So the Shannon entropy is $H = -1 \cdot \log_2 1 = 0$: the graph's degrees contain zero bits of information. This might seem a little peculiar, but the result is unsurprising. Remember, information is an answer you don't already know, and because the graph is $K_n$, we knew in advance that the degree of any vertex is $n-1$.

An important idea to keep in mind: the Shannon entropy depends on the information we expect to get. For example, if our channel outputs a letter, the information we expect to get is the letter, not the shoe size of the sender. A graph can convey a great deal of information, so we need to focus on one type of information at a time.

So let's take a graph with six vertices whose degrees are 4, 3, 3, 3, 2, and 1. If our question is "what is the degree of a vertex?", the possible answers are 1, 2, 3, and 4: the answer 1 occurs with frequency $1/6$, the answer 2 with frequency $1/6$, the answer 3 with frequency $3/6$, and the answer 4 with frequency $1/6$. Using these values we can compute the graph entropy:
$$H = -\left(\tfrac{1}{6}\log_2\tfrac{1}{6} + \tfrac{1}{6}\log_2\tfrac{1}{6} + \tfrac{3}{6}\log_2\tfrac{3}{6} + \tfrac{1}{6}\log_2\tfrac{1}{6}\right) \approx 1.792.$$
So our graph contains about 1.792 bits of information about the degrees of its vertices (the first sketch below checks this).

Now, not all information is equally useful. For example, we might want to use our graph to decide how the different nodes influence each other. One possibility is to base the influence on the degree of the node, with the total influence based on the degrees of all nodes. So imagine each node in a graph to be a source of some quantity, which we'll call inspiration. Then a question might be: which node produces the inspiration? What we need is a model for the frequency with which a node produces inspiration.

At this point we'll need to invoke some creativity, and here's everything I can teach you about being creative: while no one can teach creativity, it's possible to look for patterns. If we want to model the probability that a node is inspired, we might note that in the real world, inspiration often comes from connections to other people or ideas, for example, watching YouTube videos. So the frequency with which a node creates inspiration might have something to do with the degree of the node. This suggests modelling the probability that node $k$ creates inspiration as
$$p_k = \frac{d_k^{\alpha}}{\sum_j d_j^{\alpha}},$$
where $d_k$ is the degree of vertex $k$ and $\alpha$ is some constant. This gives us another way to measure the informational entropy of a graph; we'll refer to it as the $\alpha$-degree graph entropy.
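To check the numbers above, here is a minimal Python sketch (the function names are my own) that computes the Shannon entropy of the answer to "what is the degree of a vertex?" from a degree sequence alone:

```python
import math
from collections import Counter

def shannon_entropy(probs):
    # H = -sum of p_i * log2(p_i); answers with p_i = 0 contribute nothing.
    return sum(-p * math.log2(p) for p in probs if p > 0)

def degree_entropy(degrees):
    # Group the degree sequence by answer value, so each distinct degree d
    # contributes one term with frequency (number of vertices of degree d) / n.
    counts = Counter(degrees)
    n = len(degrees)
    return shannon_entropy(c / n for c in counts.values())

print(degree_entropy([4] * 5))                       # K_5: 0.0 bits
print(round(degree_entropy([4, 3, 3, 3, 2, 1]), 3))  # example graph: 1.792
```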
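The $\alpha$-degree entropy can be sketched the same way. Note that it sums one term per node rather than one per distinct degree value; the function name is again hypothetical, and the sketch assumes the graph has no isolated vertices (a degree of 0 would give $p_k = 0$):

```python
import math

def alpha_degree_entropy(degrees, alpha):
    # p_k = d_k**alpha / sum_j d_j**alpha, one term per node k.
    # Assumes every degree is positive.
    total = sum(d**alpha for d in degrees)
    return sum(-(d**alpha / total) * math.log2(d**alpha / total)
               for d in degrees)
```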
While we can choose $\alpha$ to be anything, there are some obvious choices. For example, $\alpha = 1$ suggests that inspiration is proportional to connections. On the other hand, if we imagine inspiration as coming from a cross-fertilization of ideas, then a vertex with $d_i$ neighbours has $\binom{d_i}{2}$, or roughly $d_i^2$, pairs of them, so $\alpha = 2$ might be more interesting.

So for example, let's find the 2-degree graph entropy of our graph. With $\alpha = 2$, the frequency associated with a node of degree $d_k$ is $d_k^2 / \sum_j d_j^2$. Our nodes have degrees 4, 3, 3, 3, 2, and 1, so the sum of the squares of the degrees is $16 + 9 + 9 + 9 + 4 + 1 = 48$, and we might view this as the total inspiration produced by all nodes. The node of degree 4 produces $4^2/48 = 16/48$ of this inspiration, which we can regard as the frequency with which it produces it, so we include $-\tfrac{16}{48}\log_2\tfrac{16}{48}$ in our graph entropy. Likewise, the nodes of degree 3 each produce $3^2/48 = 9/48$ of the inspiration, so we include those quantities, and similarly for the nodes of degree 2 and 1. So we compute
$$H_2 = -\left(\tfrac{16}{48}\log_2\tfrac{16}{48} + 3\cdot\tfrac{9}{48}\log_2\tfrac{9}{48} + \tfrac{4}{48}\log_2\tfrac{4}{48} + \tfrac{1}{48}\log_2\tfrac{1}{48}\right) \approx 2.30,$$
and this gives us a graph entropy of about 2.30 bits (the sketch at the end of this section recomputes these numbers).

The study of graph entropy is actually very recent, so it's not yet known how important it is. But a 2023 study by Sourav Mondal and Kinkar Chandra Das showed a strong correlation between the 2-degree entropy of graphs that represent molecular structures and several physical properties. So graph entropy metrics, and there are many others, may be able to give us insights into anything that can be represented as a graph, which includes almost everything.
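As promised, here is a short self-contained sketch that recomputes the 2-degree entropy of the example graph step by step, mirroring the arithmetic of the worked example (calling the alpha_degree_entropy sketch above with alpha=2 gives the same number):

```python
import math

degrees = [4, 3, 3, 3, 2, 1]
total = sum(d**2 for d in degrees)        # 16 + 9 + 9 + 9 + 4 + 1 = 48
shares = [d**2 / total for d in degrees]  # 16/48, 9/48, 9/48, 9/48, 4/48, 1/48
H2 = sum(-p * math.log2(p) for p in shares)
print(total, round(H2, 3))                # 48 2.302
```

Running the same computation with exponent 1 instead gives about 2.483 bits, so for this graph the choice of $\alpha$ changes the entropy noticeably.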