Okay, so thanks everybody for making it out to the first seminar of this semester. Today we have Sebastian Cioabă from the University of Delaware, who will be talking to us about addressing graphs and hypergraphs. Go ahead and take us away.

Okay, thank you very much, and thank you all for coming to a talk on a Friday afternoon. I will talk about some recent and not-so-recent problems regarding addressing graphs and hypergraphs. The story starts at Bell Labs in the 1970s. There is a quote from an interview with Henry Pollak in which he described working on this loop switching problem, and it essentially boils down to: our boss John Pierce came to us and said, I invented this new system, see if you can figure out how to make it work. So the slides you see here are about this loop switching problem. What was the loop switching problem? They describe it in a paper in the Bell System Technical Journal in 1971, and this is a snapshot from that paper; the description there is pretty good. You can also find a description in Mike Tate's master's thesis; he read that paper thoroughly and described what is going on. They talk about transferring information in a network, and I have here pictures from their paper. In loop switching you have these loops, these circles, and a message travels around one of these loops; when that loop touches another loop, the message can transfer to the other loop. They wanted a way of sending messages around so that a message could find a shortest path from one node to another just by using the addresses, the labelings, of the vertices. So in this situation, look at the graph whose vertices are those four circles, with two circles adjacent if they touch.
That graph is essentially a cycle, C4, and you can label its vertices with words over the alphabet {0,1} such that the distance in the graph between any two nodes equals the Hamming distance between their two labels, their two addresses. If you have this situation, even in a larger graph, it is very easy to figure out the shortest path from one node to the other just by looking at their addresses: you keep moving to a neighbor whose address decreases the Hamming distance to the destination. So this works for C4, but they describe other graphs, for example this one with six nodes. Here the problem is with the triangle (there is also a C5, but the triangle is the easiest case to see). It is actually impossible to give the vertices of a triangle addresses as words over the alphabet {0,1}, of any length, such that the distance between any two vertices equals the Hamming distance between their labels. So here is the solution that they came up with: introduce a new letter. In their Bell System Technical Journal paper they denote it by d; later on we will see it denoted by *. We now label the vertices by words of the same length over the alphabet {0,1,*}, and we want the distance in the graph between any two vertices to equal the number of positions in which the address of one vertex has a 0 and the address of the other has a 1. The letter * acts as if it were both 0 and 1 at the same time, so it never contributes to this measure. I say "so-called distance" between the labels because this function on words over {0,1,*} is not a metric in general.
For example, look at the vertices labeled a and b in the graph: their distance in the graph is 2, and the number of positions where one label has a 0 and the other a 1 is exactly 2, namely the first two positions; the d's do not contribute anything. Later on they introduce the notation * instead of d, in a paper from around the same time, 1972, called "On embedding graphs in squashed cubes"; I will tell you a bit later what this problem has to do with squashed cubes. There they describe this labeling with d replaced, as you see, by * in the first line. So, to summarize, the problem is the following. I give you a graph G. You want to find a function from the vertex set that labels each vertex by a word of the same length, say k, over the alphabet {0,1,*}, such that the distance in the graph, which I denote on the left by d_G(x,y), the length of a shortest path between x and y, equals the number of positions j where the label of one vertex is 0 and the label of the other vertex is 1. The interest is in minimizing this k, so I denote by N(G) the minimum k such that this is possible. I have not shown yet that such a k exists; I will do that in a second, but for now this is essentially the problem that they were studying. I give you some examples there, copied from various places. The first one is the cube, where you can do it with just 0's and 1's: the three-dimensional cube can be addressed with words of length three, and that is best possible. The triangle that we were talking about a bit earlier can be labeled with words of length two: 11 on the left, 10 on the top, and 0* on the right.
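As an aside of mine (not from the slides), both examples can be checked in a few lines of Python: the binary Hamming addressing of C4, the {0,1,*} addressing of the triangle, and a witness that the star "distance" is not a metric.

```python
from itertools import combinations

def star_dist(u, v):
    """Positions where one word has '0' and the other '1'; a '*' acts
    as 0 and 1 simultaneously, so it never contributes. On binary
    words this is exactly the Hamming distance."""
    return sum({a, b} == {"0", "1"} for a, b in zip(u, v))

# C4 labelled in cycle order 00 - 01 - 11 - 10: graph distance equals
# Hamming distance for every pair of vertices.
c4 = ["00", "01", "11", "10"]
cycle_dist = lambda i, j: min(abs(i - j), 4 - abs(i - j))
c4_ok = all(cycle_dist(i, j) == star_dist(c4[i], c4[j])
            for i, j in combinations(range(4), 2))

# The triangle: every pair of labels at "distance" exactly 1.
triangle = ["11", "10", "0*"]
k3_ok = all(star_dist(u, v) == 1 for u, v in combinations(triangle, 2))

# Not a metric: the triangle inequality fails through the all-star word.
violates = star_dist("00", "11") > star_dist("00", "**") + star_dist("**", "11")
```

Here `star_dist("00", "11")` is 2 while both words are at "distance" 0 from `**`, which is the failure of the triangle inequality mentioned above.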
And again you can check that between any two vertices of the triangle the distance is obviously 1, and the number of positions in which one label has a 0 and the other has a 1 is exactly 1. Then there is a larger graph on five vertices, with an addressing by words of length three. Now, I should point out, and I will mention it again a bit later, that it is not known whether determining this parameter N(G) is NP-hard or not. There is a reformulation, which I will get to, relating this parameter to biclique decompositions of certain multigraphs. But I do not know whether computing this parameter is NP-hard; I believe it should be, but I have not seen any proof, nobody has produced a reduction.

Okay. So let me explain the equivalence between addressing graphs and biclique partitions. By a biclique I mean a complete bipartite subgraph. Let us take the distance multigraph of our graph G: it has the same vertex set, and the multiplicity of the edge xy is the distance in the graph between x and y. On the diagonal you have distance zero, so you get nothing; whenever you had an edge before, the distance between those two vertices is 1, so that stays a single edge; and if you have larger distances, 2, 3, and so on, in the distance multigraph you get edges of those multiplicities. For example, the distance multigraph of the complete graph is the complete graph itself. Now, what I want to convince you of, and I will only give a sketch, is that an addressing of a graph G, that is, a function f from the vertex set of G to words of the same length k over the alphabet {0,1,*} with the property we mentioned earlier,
is equivalent to a partition of the edge multiset of the distance multigraph into complete bipartite subgraphs, bicliques. The correspondence goes as described there. If I have a biclique partition, I denote the parts, the two sides or color classes of these complete bipartite subgraphs, by A1, B1, and so on up to Ak, Bk. Given such a partition, I give the vertex x the address f(x), a word of length k over the alphabet {0,1,*}, whose position j is 0 if x belongs to Aj, 1 if x belongs to Bj, and * otherwise. One can check that this gives a valid addressing: the distance between any two vertices equals the number of positions j where f(x) and f(y) differ with one of them 0 and the other 1. In fact you can play this game for any nonnegative integer matrix, and something like this general setup is described in a paper by Fan Chung, Ron Graham, and Peter Winkler, where they talk about addressing directed graphs.

Okay. So what I wrote before essentially appears in the Graham–Pollak work, rephrased in an equivalent matrix equation form; in one of their papers Graham and Pollak attribute it to another article that appeared in the Bell System Technical Journal. It is not very difficult; if you are seeing it for the first time there may be a little too much notation, but these things are fairly straightforward: if you have this biclique partition of D,
where D stands both for the distance multigraph and for the distance matrix of the graph. If you have a biclique partition of this distance multigraph, and you put the characteristic vectors of A1 through Ak as the columns of a tall matrix X, an n by k matrix, and the characteristic vectors of the other parts B1 through Bk in a matrix Y, then the fact that you have a partition is equivalent to D = XY^T + YX^T.

Okay, so why is that useful? It is useful because from this matrix equation you can get a lower bound on k, which is pretty much the benchmark for most of the problems we will see from now on. The argument, which I will sketch, goes as follows. Take a vector w (for me vectors are columns) in the orthogonal complement of the column space of X. That means w is perpendicular to every column of X, so w^T X = 0. When I do the calculation, w^T D w = w^T X Y^T w + w^T Y X^T w; the factor w^T X kills the first term, and X^T w kills the second term, so both are zero. So you get the fact that w^T D w = 0. And that means w does not belong to E_+(D), by which I denote the subspace spanned by all eigenvectors corresponding to positive eigenvalues of D, unless w is zero: if w were a nonzero linear combination of eigenvectors corresponding to positive eigenvalues, then plugging it into w^T D w would give a positive quantity.
And that basically proves the last part. Essentially, the goal of the first line is to let you prove the second line: the orthogonal complement of the column space of X and the subspace spanned by the positive eigenvectors have trivial intersection. So the dimension of the whole space is at least the sum of the dimensions of these two subspaces. The matrix X was n by k, so the dimension of the orthogonal complement of its column space is at least n - k. From this argument one gets that k must be at least the dimension of the subspace spanned by all the positive eigenvectors, which is n_+(D), the number of positive eigenvalues of D counted with multiplicity. By a similar argument, basically replacing + with - in the subscript, you can prove that k is at least n_-(D). And what does this give? It gives this result, due to Graham and Pollak; they give a proof, but they also attribute it to Witsenhausen, without a reference. It says that if you have a graph with distance multigraph/matrix D, then the parameter N(G) that I mentioned at the beginning is at least the maximum of the number of positive and the number of negative eigenvalues of D. Okay, so that is an interesting connection between a purely combinatorial parameter on the left-hand side and a spectral, algebraic parameter on the right.

All right, so what else did Graham and Pollak do? Well, they looked at the obvious suspects. Take the complete graph. As I mentioned earlier, the distance multigraph of the complete graph is the complete graph itself, because any two distinct vertices are at distance 1.
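Since this bound is the benchmark for everything that follows, here is a throwaway helper of mine (names are my own choosing) that builds the distance matrix by breadth-first search and returns max(n_+, n_-):

```python
import numpy as np
from collections import deque

def distance_matrix(adj):
    """All-pairs shortest-path matrix of a connected graph, given as a
    dict mapping each vertex to a list of its neighbors."""
    verts = sorted(adj)
    idx = {v: i for i, v in enumerate(verts)}
    D = np.zeros((len(verts), len(verts)))
    for s in verts:
        dist, q = {s: 0}, deque([s])
        while q:                      # plain BFS from s
            u = q.popleft()
            for w in adj[u]:
                if w not in dist:
                    dist[w] = dist[u] + 1
                    q.append(w)
        for v, d in dist.items():
            D[idx[s], idx[v]] = d
    return D

def eigenvalue_bound(adj, tol=1e-8):
    """Graham-Pollak lower bound on N(G): max(n_+, n_-) of the distance matrix."""
    ev = np.linalg.eigvalsh(distance_matrix(adj))
    return max(int((ev > tol).sum()), int((ev < -tol).sum()))

# Sanity checks: C4 and the triangle both have optimal addressings of length 2.
c4 = {i: [(i + 1) % 4, (i + 3) % 4] for i in range(4)}
k3 = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
```

For both small examples the bound returns 2, matching the optimal addressings shown earlier.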
So the distance matrix is all ones everywhere except zero on the diagonal: it is J - I. If you calculate its eigenvalues, there is n - 1 with multiplicity one (I write the multiplicity as an exponent) and -1 with multiplicity n - 1. So n_+ is 1 and n_- is n - 1; the maximum is n - 1, and this proves that the complete graph cannot be addressed with words of length n - 2 or less. The result can be phrased in terms of biclique partitions: the edge set of the complete graph on n vertices cannot be partitioned into n - 2 or fewer bicliques. In the Graham–Pollak paper they mention that this is a result for which they do not know any combinatorial proof. There are several proofs of the result now, by Peck, by Tverberg; there is also a proof that claims to be a counting proof, replacing the dimension argument by a pigeonhole principle. Morally I think it is still a linear-algebraic proof, but that depends on how you think about these things. So the linear-algebraic method is used for the lower bound. For the upper bound, because determining N(K_n) is the same as partitioning the edge set of K_n into bicliques, it is a matter of finding constructions, and it is easy to partition the complete graph on n vertices into n - 1 bicliques. Perhaps the simplest way: on four vertices, you write the graph as a union of stars; you take one star, then another star, then another at the bottom. There are many ways, and there is an exercise about this; there was one in the older version of the Babai and Frankl notes.
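Both halves of this complete-graph computation can be checked in a few lines (again my sketch, not from the slides): the spectrum of J - I, and the fact that the star decomposition yields matrices X, Y with D = XY^T + YX^T.

```python
import numpy as np

n = 5
D = np.ones((n, n)) - np.eye(n)     # distance matrix of K_n is J - I
ev = np.sort(np.linalg.eigvalsh(D))
n_plus = int((ev > 1e-8).sum())     # one positive eigenvalue, namely n - 1
n_minus = int((ev < -1e-8).sum())   # eigenvalue -1 with multiplicity n - 1

# Star decomposition of K_n into n - 1 bicliques: the i-th biclique
# joins A_i = {i} to B_i = {i+1, ..., n-1}.
k = n - 1
X = np.zeros((n, k))                # column i: characteristic vector of A_i
Y = np.zeros((n, k))                # column i: characteristic vector of B_i
for i in range(k):
    X[i, i] = 1
    Y[i + 1:, i] = 1
stars_ok = np.array_equal(X @ Y.T + Y @ X.T, D)
```

So the eigenvalue bound max(1, n - 1) = n - 1 meets the star construction, giving N(K_n) = n - 1.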
I don't know if you are aware that as of March 2020 there is a new version of the Babai and Frankl notes; if you Google it you will find the PDF file. They have an exercise, I think it was 1.4.5, asking you to show that there are more than 2(n-4)^(n-4) non-isomorphic decompositions of K_n into n - 1 edge-disjoint bicliques. I think it is an open problem to determine the exact number; clearly for n = 4 that bound gives you nothing. And there is another decomposition in which you can use a K_{2,2}, and then one edge at the top and one edge at the bottom. So there are many ways, but I do not think a formula is known.

Okay. So what about trees? For trees they show something interesting: the determinant of the distance matrix of a tree on n vertices does not depend on the tree; it is always the value I wrote there. They use this result to deduce that the minimum length of an addressing for a tree, the optimal length, is n - 1, and they give a recursive procedure: you can address trees optimally with words containing only 0's and 1's, no stars. Later, Graham and Lovász also studied the distance matrix of trees, found a formula for its inverse, and proved other things related to its characteristic polynomial. More about this is in a recent 2019 paper, and there is also a paper, by several authors, which I forgot to mention, in which they proved a conjecture of Graham and Lovász about the coefficients of the characteristic polynomial of the distance matrix.
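The value in question is det D(T) = (-1)^{n-1} (n-1) 2^{n-2} for any tree T on n vertices. A quick empirical check of this tree-independence (my sketch; `random_tree` is a hypothetical helper that grows a random labelled tree):

```python
import numpy as np
import random
from collections import deque

def random_tree(n, seed):
    """Grow a random labelled tree by attaching each new vertex
    to a uniformly chosen earlier vertex."""
    rng = random.Random(seed)
    adj = {v: [] for v in range(n)}
    for v in range(1, n):
        u = rng.randrange(v)
        adj[u].append(v)
        adj[v].append(u)
    return adj

def tree_distance_det(adj):
    """Determinant of the tree's all-pairs shortest-path matrix (via BFS)."""
    n = len(adj)
    D = np.zeros((n, n))
    for s in adj:
        dist, q = {s: 0}, deque([s])
        while q:
            u = q.popleft()
            for w in adj[u]:
                if w not in dist:
                    dist[w] = dist[u] + 1
                    q.append(w)
        for v, d in dist.items():
            D[s, v] = d
    return round(np.linalg.det(D))

# Graham-Pollak: the determinant depends only on n, not on the tree's shape.
ok = all(tree_distance_det(random_tree(n, s))
         == (-1) ** (n - 1) * (n - 1) * 2 ** (n - 2)
         for n in range(2, 9) for s in range(3))
```

For instance at n = 2 the formula gives -1, matching det [[0,1],[1,0]].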
For cycles, they prove that the optimal length of an addressing is n - 1 if the cycle is odd and n/2 if the cycle is even, and you have there some examples for n = 6 and n = 5. So that is done. What they could not do was a good bound for general graphs. In their paper they have a decent addressing scheme that gives an upper bound on N(G) of roughly the sum of all distances between vertices, something related to the Wiener index, which is at most the diameter times n - 1. Later, Ron Graham conjectured that in fact N(G) should be at most n - 1 for any graph. This became known as Graham's squashed cube conjecture, and it was proved in 1983 by Peter Winkler. It is a very nice paper; half of a chapter of the book A Course in Combinatorics by van Lint and Wilson is devoted to Winkler's solution. So any graph can be addressed with words of length at most n - 1, but determining the smallest value, we do not know how to do.

Here I will explain the reason for the name: what is a squashed cube? Well, whenever you see stars you can think of them as squashing some part of a cube. For example, start with the cube there in the center of the picture, the three-dimensional cube whose vertices are labeled, as we all know, by binary words of length three (the picture is from one of the Graham–Pollak papers). The first step is to squash the face on the right, the one whose vertices have 1 in the first position and anything in the second and third positions. That means you merge those vertices into a single vertex, denoted 1**. The second step is to squash the edge denoted there on the left by 01*,
the edge connecting 010 to 011. When you squash it you get the label 01*. Then you leave the other two vertices alone, and you end up with an addressing of K_4: you have squashed the four vertices of that face, a C4, into 1**, you have squashed two other vertices into 01*, you left the other two vertices alone, and this is a valid addressing. That is the meaning of the name squashed cube.

For other graphs, beyond what was known from Graham and Pollak, you can ask what happens for your favorite graph, like the Petersen graph, and there is a paper on this in Discrete Mathematics. The eigenvalue bound gives you 5: for the Petersen graph, the maximum of the numbers of positive and negative eigenvalues of its distance matrix is 5, so any addressing must have length at least 5. These people (David Gregory was one of my advisors) tried hard to find an addressing of length 5 and could not; they found one of length 6, and they proved that equality cannot happen in the eigenvalue bound. So this number N for the Petersen graph has to be strictly greater than 5, hence at least 6, and there is an addressing achieving 6. In the picture the labels are encoded: a stands for 0, b stands for 1, and 0 stands for *; again, I copied this picture from their paper. It is open for other kinds of graphs; I will tell you in a little bit about other related graphs, but outside of cases like the complete graphs, finding the optimal addressing is hard.

So I want to talk about some results we obtained a few years ago, together with, among others, one of my undergraduate students, Michelle Markiewitz. She was an undergraduate at UD.
We wrote this paper with her and some collaborators; she now works at the NSA. We addressed Hamming graphs, for which it is perhaps not so surprising that you can find the optimal addressing. So what is a Hamming graph? In H(d,q) you have an alphabet of size q, with q at least 2, and words of length d over this alphabet; two words are adjacent if and only if they differ in exactly one position. In my picture, with q = 3 and d = 2, the words are 00, 01, 02, then 10, 11, 12, then 20, 21, 22, and two words are adjacent if they are in the same row or the same column. So the vertices of H(d,q) are tuples (x1, ..., xd), giving q^d vertices, and you can prove, not surprisingly, that the optimal addressing has length d(q - 1). For the lower bound, one way is to actually calculate the distance spectrum of H(d,q): these graphs are distance-regular, so their distance matrices are polynomials in their adjacency matrices, and you can write the distance matrix in a nice way in terms of the adjacency matrix and find its eigenvalues. You see that the distance spectrum has one positive eigenvalue and one negative eigenvalue, the latter with multiplicity d(q - 1), which gives the lower bound. For the upper bound we just had constructions that address these graphs. Again this is not surprising, because H(d,q) is the box (Cartesian) product of d copies of K_q, and each factor you can address with q - 1 symbols by Graham and Pollak.
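One can verify the shape of this distance spectrum numerically; conveniently, in H(d,q) the graph distance between two words is just their Hamming distance, so the distance matrix is immediate (a sketch of mine):

```python
import numpy as np
from itertools import product

def hamming_distance_matrix(d, q):
    """Distance matrix of H(d,q): graph distance = Hamming distance."""
    words = list(product(range(q), repeat=d))
    return np.array([[sum(a != b for a, b in zip(u, v)) for v in words]
                     for u in words], dtype=float)

d, q = 2, 3
ev = np.linalg.eigvalsh(hamming_distance_matrix(d, q))
n_plus = int((ev > 1e-8).sum())    # one positive eigenvalue
n_minus = int((ev < -1e-8).sum())  # d*(q-1) = 4 negative eigenvalues
```

For H(2,3), the 3x3 rook's graph, this gives one positive eigenvalue and four negative ones, matching the bound d(q - 1) = 4.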
We also looked further: we have a different method that can deal with box products of complete graphs of different sizes, and you can determine the optimal length of their addressings. We have a result saying that if I glue an addressing of G1 together with addressings of G2 up to Gl, I get an addressing of the product G, and that gives an upper bound on N(G) of the sum of the N(Gi). Strangely, in some examples this inequality is strict. We found conditions under which we can prove equality: if each Gi is transmission-regular, meaning the distance matrix of each of these graphs has constant row sums, and in addition the optimal addressing is attained by the number of negative eigenvalues, then we can prove equality; but in general we do not know exactly when the inequality is strict.

The other graphs are the Johnson graphs. In the Johnson graph J(n,k), the vertex set is all k-subsets of a set with n elements, and A is adjacent to B if and only if the intersection of A and B has size k - 1, so they differ in exactly one element. These are very well studied graphs in algebraic graph theory and beyond. Their eigenvalues can be calculated and are given in general by Eberlein polynomials. The distance spectrum can also be computed; this was done by Atik and Panigrahi in 2015, but it also appears in a very nice 1994 paper by Koolen and Shpectorov, in which they classified all distance-regular graphs whose distance matrix has exactly one positive eigenvalue.
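This one-positive-eigenvalue phenomenon is easy to see numerically, using that in J(n,k) the distance between two k-sets A, B is k minus the size of their intersection (my sketch, on J(5,2), the complement of the Petersen graph):

```python
import numpy as np
from itertools import combinations

def johnson_distance_matrix(n, k):
    """Distance matrix of the Johnson graph J(n,k): d(A,B) = k - |A & B|."""
    verts = [frozenset(c) for c in combinations(range(n), k)]
    return np.array([[k - len(a & b) for b in verts] for a in verts],
                    dtype=float)

n, k = 5, 2
ev = np.linalg.eigvalsh(johnson_distance_matrix(n, k))
n_plus = int((ev > 1e-8).sum())    # exactly one positive eigenvalue
n_minus = int((ev < -1e-8).sum())  # the negative one has multiplicity n - 1
```

For J(5,2) this returns one positive eigenvalue and n - 1 = 4 negative ones, so the eigenvalue bound is n - 1.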
So you can get these eigenvalues, and you see again the same situation: one positive eigenvalue of the distance matrix, with multiplicity one, and one negative eigenvalue, with multiplicity n - 1. So the eigenvalue bound for this graph gives you n - 1. In our paper we could push just a little bit and improve the bound: you can show that equality does not happen, so the lower bound is at least n. Then I gave this problem to another undergraduate. Here is a picture taken in the fall of 2019, a year and a bit ago. Noga Alon, who is now at Princeton, came to give a colloquium in our department; I was there with him and with Brandon Gilbert, the undergraduate who worked with us on this problem, and another undergraduate from Singapore, Quang Tan, who worked with me on a different problem.

So we studied the Johnson graphs, and I want to tell you now about the ideas of the undergraduate, Brandon Gilbert, because they are really nice. We could find an upper bound; we have it in this paper, and Brandon wrote a senior thesis about it: you can address the Johnson graph with words of length k(n - k). These are again snapshots from the paper. To do the addressing, and I will go quickly through it, as it is not the simplest addressing scheme you have ever seen, you define a function from the k-subsets, the vertices of the Johnson graph, to the Cartesian product of the complement of K and K, where K is a fixed k-set; note that this Cartesian product has cardinality k(n - k). You define it as follows: if you take a set S that equals K, the function outputs the empty set. Otherwise, if S is not K, you put in one part, in A, the elements that S has and K does not, and you order them increasingly as x1, x2, and so on.
As you see there, in B you put the elements that K has and S does not; because both K and S have cardinality k, there will be the same number, say t, of elements in each, and you order those increasingly too. Then f(S) outputs pairs of an extremal type: you pair up the largest x with the smallest y, and so on; you have an example there. The addressing scheme then goes as follows; it is an algorithm that Brandon came up with. You input the vertex S, a k-subset; the letters in the word A(S) associated with S are indexed by pairs (x, y), where x comes from the complement of K in [n] and y comes from K, and each letter is determined by this procedure, which, yes, if you are seeing it for the first time, is not the easiest to parse. Essentially what Brandon did is run computations of this optimal addressing parameter, and his program would always return the value k(n - k) for small cases. He then basically reverse-engineered from his computations the algorithm I am showing, and then proved that it actually produces a valid addressing. Again, I am throwing a lot of information at you all at once, but essentially, for example here with n = 6 and k = 3, the letters of the words are indexed by the pairs (4,1), (5,1), (6,1), and so on; there are nine such ordered pairs. For each ordered pair and for each vertex, say {1,2,3}, you apply the algorithm from before and you obtain each letter; a 0 with exponent 2 there means that letter is 0 because you ended up applying the second case of the previous algorithm, and so on.
The output is all on the right: you get addresses of length nine for the Johnson graph J(6,3). Again, I am not going to go through the proof that this is correct and gives a valid addressing. Okay, how good is it? Well, when k = 1, J(n,1) is the complete graph, so it gives n - 1, which is optimal. For k = 2 we checked by computer: the construction is optimal for n = 4, 5, 6; you have some of these graphs here, J(4,2) on six vertices, J(5,2), the complement of the Petersen graph, on 10 vertices, J(6,2) on 15. We had to stop at J(7,2); we do not know what happens after that. Also, the construction is not sharp in general: when we started working with Brendan McKay on this paper, he ran a computer search, and his program gave an addressing of J(6,3) of length nine. So for the Johnson graphs you have a lower bound of n and an upper bound of k(n - k), and I do not know the optimal values.

In the paper we also looked at addressing random graphs. This is a quote from a 1988 survey of Ron Graham; R(G) there is the same as our parameter N(G). He makes the statement that we do not know how this parameter behaves for random graphs: it is perhaps natural to guess that it should equal the number of vertices minus one for almost all large graphs. Remember that Peter Winkler proved that R(G) is always at most the number of vertices minus one, so one might guess that this is attained for most graphs. So Brandon learned nauty and the Brendan McKay computer programs and so on,
and he did the computations for small n. In the table, n is the first column, then the second column is the number of connected labeled graphs on n vertices: on two vertices there is one graph, on three vertices two graphs, and so on, and it grows quite fast. Then there is the distribution of this parameter. You see, when n = 4, among the graphs on four vertices it is only the cycle C4 that has an optimal addressing of length two; that is the 1 over there. And if you go to n = 8, that single entry is the three-dimensional cube. What we noticed is that it is not the last column, n - 1, that is the largest; there seems to be a lot of mass in the other columns. For n = 10, this is what Brendan McKay did: there are too many graphs, so he sampled a chunk of them and obtained the distribution, and again it is not the n - 1 column that is largest; it is some of the other ones. So what is going on? This, I remember from a while ago, is settled by an argument of Noga Alon: he gave a proof that for almost all graphs the optimal addressing length moves away from n - 1 by something, so it answers Graham's question negatively. The argument, which I will just sketch here, is this. You pick a k which is about 2 log_2 n, a little less. From previous work of Noga Alon, the random graph G(n, 1/2) almost surely contains a copy of any fixed graph on k vertices. And the graph he is interested in is like a grid.
I mean, you can take a grid which is square root of k by square root of k, like the Hamming graph that I was talking to you about earlier. So you have k vertices there and n minus k vertices here. Now, on this square root of k by square root of k grid, the distances between distinct vertices are one or two, and in the random graph the distances are also going to be just one and two. In here we can cover all the edges of the distance multigraph using roughly two square root of k bicliques, and for the rest of the edges in the distance multigraph we use about n minus k plus one bicliques. So at the end we end up with an upper bound of the form N(G) ≤ n − k + 1 + 2√k, where k is about two times log base two of n. And that's exactly the bound over there. Now, the behavior of this parameter for random graphs is a problem that's still open. For a lower bound, the only thing I know is that you can use Wigner's semicircle law and so on with the eigenvalues: you have about n over two eigenvalues that are positive and about n over two that are negative, and you get a lower bound of n over two minus a lower-order term. So is this N(G) closer to the lower bound or to the upper bound? Nobody knows. There is related research on biclique partitions of the random graph G(n, p). The difference is that in the biclique partition problem you have a random graph and you're trying to minimize the number of bicliques that partition the edge set of that graph, while in the addressing problem N(G) is the minimum number of bicliques that decompose not the graph itself but its distance multigraph. The random graph has diameter two almost surely, so its distance multigraph is going to be the graph itself plus twice the complement.
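To get a feel for how far below n − 1 this pushes the parameter, here is a quick numerical sketch of the bound n − k + 1 + 2√k with k ≈ 2 log₂ n (my own illustration; I round k down to a perfect square so the √k × √k grid in the argument makes sense):

```python
import math

def alon_upper_bound(n):
    """Sketch of the bound N(G) <= n - k + 1 + 2*sqrt(k) for G(n, 1/2),
    with k the largest perfect square at most about 2*log2(n)."""
    s = math.isqrt(int(2 * math.log2(n)))  # side of the sqrt(k) x sqrt(k) grid
    k = s * s
    return n - k + 1 + 2 * s

for n in [2**10, 2**20, 2**30]:
    print(n, n - 1, alon_upper_bound(n))
```

Even at n around a million the saving over n − 1 is only a couple dozen bicliques, consistent with the gap to the n/2-type eigenvalue lower bound being wide open.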
And when you turn that around, it's just a complete graph on top of which you put a random graph. I'm not an expert in random graphs by any means, but that seems to mess things up, and the results are perhaps less strong than in the realm of biclique partitions of random graphs, where there are all these works of Alon, of Chung and Peng, of Alon, Bohman and Huang, and so on. Now I'll give you another question that's wide open, a graph for which the known values are far apart: the cocktail party graph. This is the complement of a perfect matching on m edges, which I denote K_{2m} − mK₂, the complete graph on 2m vertices minus a matching. It originates in a problem in geometry, due to Zaks, about packing boxes such that they touch, in a certain dimension. He came up with constructions — some recursive construction — of order three m over two, and in the initial paper the eigenvalue bound was used to get a lower bound of m plus one or something like that. Later on Hoffman improved that bound to the value over here, but still there is a huge gap between them. The problem here is that the eigenvalue bound is not close to the construction, because these graphs K_{2m} − mK₂ have many zero eigenvalues, so the maximum of n₊ and n₋ is only about m plus one. Hoffman did an amazing job to improve that by about square root of two m, but it's still a big gap. About ten years ago I gave this problem to Mike Tait to start doing research on, and I think at that time he managed to prove that the upper bound gives the right value for several small values of m. But again, this is still wide open. So, I said I'd talk about graphs and hypergraphs — let me now deal with hypergraphs, addressings for hypergraphs.
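The remark about the many zero eigenvalues can be verified exactly: the distance matrix of K_{2m} − mK₂ is J − I + M, where M is the adjacency matrix of the perfect matching, so its spectrum is 2m (once), −2 (m times), and 0 (m − 1 times). A small exact-arithmetic check of this claim (my own sketch, not code from the talk):

```python
def distance_matrix_cocktail(m):
    """Distance matrix of K_{2m} minus a perfect matching (m >= 2):
    matched vertices (2i, 2i+1) are at distance 2, all other pairs at 1."""
    n = 2 * m
    return [[0 if u == v else (2 if u // 2 == v // 2 else 1)
             for v in range(n)] for u in range(n)]

def apply(D, v):
    """Matrix-vector product with exact integer arithmetic."""
    return [sum(row[j] * v[j] for j in range(len(v))) for row in D]

m = 5
D = distance_matrix_cocktail(m)
n = 2 * m

# Eigenvalue 2m, once: the all-ones vector.
assert apply(D, [1] * n) == [2 * m] * n
# Eigenvalue -2, m times: e_{2i} - e_{2i+1} for each matched pair.
for i in range(m):
    v = [0] * n; v[2 * i] = 1; v[2 * i + 1] = -1
    assert apply(D, v) == [-2 * x for x in v]
# Eigenvalue 0, m - 1 times: (e_{2i} + e_{2i+1}) - (e_0 + e_1).
for i in range(1, m):
    v = [0] * n; v[0] = v[1] = -1; v[2 * i] = v[2 * i + 1] = 1
    assert apply(D, v) == [0] * n
print("n_plus = 1, n_minus =", m)  # plus m - 1 zero eigenvalues
```

So n₊ = 1 and n₋ = m, which is why the eigenvalue bound stalls at roughly m while the construction is of order 3m/2.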
I mentioned addressing graphs and hypergraphs in the title. For this addressing problem on hypergraphs I don't know of any applications, but, again motivated by biclique partitions and decompositions, you can study a similar problem for hypergraphs. You can phrase it as follows: f_r(n) is the minimum number of complete r-partite r-uniform hypergraphs that partition the edge set of K_n^(r), the complete r-uniform hypergraph — the vertex set has cardinality n and the edge set has cardinality n choose r. So for f_r(n) we're looking at partitioning all those hyperedges into complete r-partite r-uniform hypergraphs. For example, when r equals three, in a complete three-partite three-uniform hypergraph you have three pairwise disjoint vertex subsets and you take all the triples with exactly one endpoint in each of them. And you want to ask: what is the minimum number of such things that partition the edge set of the complete hypergraph? This is from a paper in, I guess, the second issue of Graphs and Combinatorics. As mentioned on the previous slide, the Graham–Pollak theorem says that f₂(n) is n minus one, and Noga Alon proved that f₃(n) is n minus two. So f₂(n) and f₃(n) are linear. But after that they are no longer linear; in fact f_r(n) is of order of magnitude n to the power floor of r over two. This is the abstract of his paper. His lower bound used essentially a hypergraph version of Tverberg's proof — you come up with some system of linear equations and so on — and his upper bound was a recursive construction. The lower bound gave the value I wrote there; later on, with André Kündgen and Jacques Verstraëte, we improved the bound, but not in the largest term, the n to the floor of r over two.
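For the r = 2 base case, the upper bound f₂(n) ≤ n − 1 comes from the classical star partition, which is easy to verify in code (a sketch of the standard construction; the hard direction of Graham–Pollak is that fewer bicliques is impossible):

```python
from itertools import combinations

def star_partition(n):
    """The classical partition of E(K_n) into n - 1 bicliques (stars):
    the i-th biclique is {i} x {i+1, ..., n-1}."""
    return [([i], list(range(i + 1, n))) for i in range(n - 1)]

def covers_exactly_once(n, bicliques):
    """Check that the given bicliques partition the edge set of K_n."""
    count = {e: 0 for e in combinations(range(n), 2)}
    for left, right in bicliques:
        for u in left:
            for v in right:
                count[tuple(sorted((u, v)))] += 1
    return all(c == 1 for c in count.values())

print(covers_exactly_once(8, star_partition(8)))  # True
print(len(star_partition(8)))  # 7 = n - 1, optimal by Graham-Pollak
```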
And the coefficient in front of it is the same as Noga's, but with perhaps a simpler proof, by reducing the problem to — relating it to — a biclique decomposition of Kneser graphs. This is the best lower bound known so far; I don't know of any improvement of the lower bound, and it's all linear-algebraic. On the other side, like I said, Noga Alon gave a recursive construction. In a paper we give a simple construction showing f_{2k}(n) is at most n minus k choose k, and later on with Mike we managed to shave off some of it, but again we couldn't cut off anything of order n to the k. So in general you can think of it as f_r(n) being at most (1 + o(1)) times n choose floor of r over two. But recently Imre Leader and his students and collaborators managed to make progress and improve the upper bound: for even r they basically brought the coefficient in front down from one to fourteen over fifteen. In general they looked at what coefficient you can put in front, so you have an inequality of the form f_r(n) ≤ c_r times n choose floor of r over two. They proved some other results about these parameters — some very nice results that I won't go through here. Just to mention, their construction is kind of a long recursive construction, but they make an improvement in a certain place where they decompose Cartesian products of edge sets of complete graphs into Cartesian products of edge sets of complete bipartite graphs in a better way. So, for example, f₄(n) is still not known: it's between, I think, n choose two divided by three on one side and fourteen over fifteen times n choose two on the other. So there's a big gap — one third versus fourteen fifteenths. There are many variations; I'll just mention a few.
Again motivated by the work of Zaks in geometry, you can change the problem a little bit: BP_{1,2}(K_n) is the minimum number of bicliques that cover the edges of K_n such that each edge is covered once or twice. If you cover each edge exactly once, n minus one is the best you can do; but if you allow once or twice, it turns out you can do it with about two square root of n, and the linear-algebraic method — a bound of Huang and Sudakov — gives a lower bound of square root of n minus one. But I don't know the exact value of this. With Mike we looked at this problem, and at least for small values of n the upper bound was the right one; I don't know what the exact value should be in general. Again in the paper with Mike we studied this parameter for other lists. You can look at BP_{1,3}(K_n), the minimum number of bicliques covering the edge set of K_n such that every edge is covered one or three times; this one turns out to be linear, between n over two and four n over seven. But BP_{1,4} behaves in a different way: it's of order of magnitude square root of n. Now, the simplest list would be a list with just one element. In that case you look at the parameter BP_λ(K_n), the minimum number of bicliques that cover the edge set of the complete graph such that each edge is covered exactly lambda times. There is a conjecture of de Caen, Gregory and Pritikin from '93 that if you fix lambda — for lambda at most a certain small value, I forget — then for n large enough this value equals n minus one. Recently Rohatgi, Urschel and Wellens proved an asymptotic version of it: if you fix lambda, then BP_λ(K_n) is (1 + o(1)) n. So we cannot get n minus one yet, but still this is a very, very nice result.
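To see how a 2√n-type upper bound can arise once each edge may be covered once or twice, here is a small verified construction on n = s² vertices using 2s − 2 bicliques (my own sketch of a grid-style construction, not necessarily the one from the paper with Mike):

```python
from itertools import combinations

def cover_once_or_twice(s):
    """Bicliques covering every edge of K_{s^2} once or twice, using
    2s - 2 bicliques; vertices are pairs (row, column)."""
    V = [(i, j) for i in range(s) for j in range(s)]
    bicliques = []
    for i in range(s - 1):  # row i vs all later rows
        bicliques.append(([v for v in V if v[0] == i],
                          [v for v in V if v[0] > i]))
    for j in range(s - 1):  # column j vs all later columns
        bicliques.append(([v for v in V if v[1] == j],
                          [v for v in V if v[1] > j]))
    return V, bicliques

V, bicliques = cover_once_or_twice(4)  # n = 16 vertices
count = {frozenset(e): 0 for e in combinations(V, 2)}
for left, right in bicliques:
    for u in left:
        for v in right:
            count[frozenset((u, v))] += 1
print(len(bicliques))                            # 6 = 2*sqrt(16) - 2
print(all(c in (1, 2) for c in count.values()))  # True
```

Row bicliques cover each cross-row edge exactly once and column bicliques cover each cross-column edge exactly once, so every edge of the complete graph ends up covered once or twice.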
For directed graphs there's not much known. For digraphs you can play this addressing game in which, when you count positions in the addresses, you want the distance in your digraph from x to y to be the number of positions in which x has a zero and y has a one. There is less work on this one. I remember that at a meeting in 2012 or so in Halifax I gave a talk about these results, and both Fan Chung and Ron Graham said that nobody had made much progress on this question since their work. So this would again be an interesting project, to see if one can do more about these directed graphs. I'm almost done. This is kind of a tribute to Ron Graham and his many contributions; there are several very nice papers about his work and so on. Again, the Graham–Pollak theorem is perhaps my favorite result in graph theory — the fact that you cannot partition the complete graph into n minus two or fewer bicliques. There's a nice refinement of this result in a paper by Alon, Brualdi and Shader from '91: they proved that in any partition into n minus one bicliques — and remember, we don't know how many such partitions there are; that's the Babai–Frankl exercise — you can pick one edge from each biclique such that you get a spanning tree. That's really a nice result. And there's a conjecture, which I think is still open, of Dominique de Caen, that you can actually find a path. You can convince yourself in those pictures that you can find a path through the green, blue and red, but this is not known in general. Okay, I'm going to stop here; that's everything I had to say. Thank you again for your attention. If we could all thank our speaker in some way, either in the chat or — Ava's got the little clap emoji there — then we'll open it up for some questions. Thank you. I've got a quick question for you. Yeah.
That question about f_r(n) over n choose floor of r over two — is it known to converge? No. I mean, there is a monotonicity, a very simple thing relating, say, f₃(n) and f₂(n − 1): if I have a decomposition of the bigger hypergraph into complete tripartite pieces, I just delete a part — whenever I have the vertex n in one color class, I throw that class away — and that gives me a biclique decomposition of the smaller complete graph. And that's about it; other than that, I don't know. For Turán numbers you have this thing with Turán densities, where you can count and you get these limits and so on; I don't think anything like that is known for this problem. And the same for the other one: like I said, BP_{1,2} is at most two square root of n and at least square root of n minus one, and it's not known whether its ratio to square root of n converges to anything. Yeah, that one seems harder than the f_r over n choose floor of r over two one — maybe some subadditivity argument or something. Yeah, I don't know. Do we have any other questions for our speaker? I probably missed this, but back on that table where you looked through all the graphs on small numbers of vertices — was there some intuition as to why? I think there was a conjecture given on maybe where the center of the distribution would be, but is there some reasoning as to why you would think that would be the case? Well, yeah, that's a good question. I don't know — that quote is from Ron Graham's paper, and unfortunately I never asked him about it. Winkler proved that R(G) is always at most the number of vertices minus one, and actually his scheme gives you n minus one. Beyond that, I don't know.
What would your guess be about this parameter — what does it measure? When it's small, like you see over there for n equals eight, the smallest value is three. This N(G) — I didn't mention it — is always at least about log base two of n. In that situation your graph is kind of like the cube, and the cube is very nice, but the random graph is not that nice, so maybe the values for the random graph should be on the other side of log₂ n. But yeah, I don't have any real intuition about what the exact value is. Okay — it's just very interesting data. Yeah, this is, you know, kind of a more practical thing: you have a student, you give them projects, and this came up. It was an easy excuse for the student to learn to go calculate these things, and when we got stuck I thought, okay, we can email Brendan McKay — and he became interested and could push this a little bit. And the other thing is that these small cases are useful because they gave us the intuition that the answer is not n minus one, but they don't give you an intuition of what in the world happens for large n. I don't know. Yeah. Okay. Thanks. Any other questions? Okay, in that case, thanks again, Sebi. Thank you all for coming out, and have a good weekend. Thanks.