Ladies and gentlemen, thank you everyone for making it out today. This week we have He Guo, who is currently a postdoctoral fellow at the Technion, Israel Institute of Technology. I believe it is 9:30 pm there right now, so he has very graciously agreed to speak at the seminar at this very late hour for him; I really appreciate that. Today he'll be talking about the Prague dimension of random graphs. Please take us away.

Thanks a lot for the invitation and the introduction. I'm going to talk about the Prague dimension of random graphs. This is based on joint work from when I was a graduate student at Georgia Tech with my advisor, Lutz Warnke. Before I formally introduce the definition of the Prague dimension, let me give you some idea of what it is about. It is a concept about complexity. It is about representation, where representation means that we try to find an efficient or compact encoding of some object. It also relates to decomposition problems, where one tries to divide an object into a minimum number of simpler objects. And the Prague dimension is also a kind of dimension, where one tries to embed an object into a minimum number of one-dimensional objects. That is the high-level idea: the Prague dimension is a parameter related to all of these concepts. It was introduced by Nešetřil, Pultr, and Rödl in the late 1970s. It is a natural notion with many equivalent definitions, and I will give one definition on the next slide. Before our result, many approaches had been applied to study the Prague dimension, including algebraic, combinatorial, and information-theoretic ones. Here we use a probabilistic approach.
It has been proved that it is NP-hard to determine the Prague dimension of a general graph, so determining the Prague dimension of a graph is an intriguing but hard problem. In this talk I will present a result where we determine the order of magnitude of the Prague dimension of random graphs, resolving a Prague dimension conjecture of Füredi and Kantor.

Now we can move on to the definition. For a graph G, the Prague dimension of G is the minimum number k such that there exists a k-colorable clique edge covering of the complement of G. I need to explain this definition word by word. First, what is a clique edge covering of a graph? It is a collection of cliques in the graph that together cover all the edges of the graph. For example, in this picture the colorful cliques form a clique edge covering of the graph, because each edge of the graph is covered by at least one clique in the collection. Next, given a clique covering, what does it mean for it to be k-colorable? It means we can color the cliques in the collection so that cliques in the same color class are vertex-disjoint. Note that the clique covering in the picture is three-colorable but not two-colorable: these three cliques share a common vertex, so they must receive different colors. Therefore the chromatic number of this clique covering is three. So that is the definition of the Prague dimension: the minimum k such that there exists a k-colorable clique edge covering of the complement of G. The Prague dimension was introduced by Nešetřil, Pultr, and Rödl; there are many equivalent definitions, and the one I give here is the most useful for us.
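The two ingredients of the definition above, a clique edge covering and a proper coloring of its cliques, can be checked mechanically on small examples. Here is a minimal sketch (the function names are my own, not from the talk); the greedy routine returns a proper coloring of the cliques, so its color count is only an upper bound on the minimum:

```python
from itertools import combinations

def is_clique_edge_cover(edges, cliques):
    """Check that every edge lies inside at least one clique (cliques are vertex sets)."""
    covered = set()
    for q in cliques:
        covered |= {frozenset(e) for e in combinations(sorted(q), 2)}
    return {frozenset(e) for e in edges} <= covered

def greedy_color_cliques(cliques):
    """Greedily color cliques so that vertex-sharing cliques get distinct colors."""
    colors = []
    for i, q in enumerate(cliques):
        used = {colors[j] for j in range(i) if cliques[j] & q}
        c = 0
        while c in used:
            c += 1
        colors.append(c)
    return colors

# Triangle K3: one clique {0,1,2} covers all edges, and one color suffices.
assert is_clique_edge_cover([(0, 1), (1, 2), (0, 2)], [{0, 1, 2}])
assert max(greedy_color_cliques([{0, 1, 2}])) + 1 == 1

# Path 0-1-2: two edge-cliques share vertex 1, so they need two colors.
assert is_clique_edge_cover([(0, 1), (1, 2)], [{0, 1}, {1, 2}])
assert max(greedy_color_cliques([{0, 1}, {1, 2}])) + 1 == 2
```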
For brevity, let us define the clique cover chromatic number of a graph G to be the Prague dimension of the complement of G; that is, the minimum k such that there exists a k-colorable clique edge covering of G itself. Why is this a chromatic number? You can view each clique edge covering as a hypergraph: the vertex set of the hypergraph is the vertex set of G, and the vertex set of each clique in the collection corresponds to a hyperedge. Then you need to color the hyperedges so that two hyperedges sharing a vertex receive different colors. So the parameter is really the chromatic index of a hypergraph built from the clique covering. I hope everyone understands this definition, since we will work with this parameter for the rest of the talk; this is a good point for questions.

[Question] There's one edge in the graph that belongs to two cliques; is that inevitable, or is that okay? The edge between the blue and red cliques.

So you mean this edge? Yes, it is simply covered by both, and that doesn't matter. The only requirement is that cliques sharing a vertex receive different colors. Actually, that is a good point: I will say a little later about the variant where the cliques are required to be edge-disjoint. All right, let's move on.

Füredi and Kantor made the following conjecture: with high probability, the Prague dimension of the binomial random graph G(n,p) has order of magnitude n/log n for every constant p. As a remark, the Prague dimension of an n-vertex graph can actually be as large as n-1, which is much larger than n/log n.
So the conjecture states that for most graphs the Prague dimension is roughly n/log n. Indeed, when p = 1/2 the binomial random graph G(n,1/2) simply takes each n-vertex graph uniformly at random, so if the conjecture holds with high probability, then for almost all graphs the Prague dimension is roughly n/log n; the n-1 case is very rare. To prove the conjecture, we only need to prove that the clique cover chromatic number, defined on the previous slide, has order of magnitude n/log n with high probability. By definition, the Prague dimension of G(n,p) is the clique cover chromatic number of the complement of G(n,p), and the complement of G(n,p) has the same distribution as G(n,1-p); so if we can show the bound for all constant p, we are done.

So let us focus on the clique cover chromatic number. First of all, the lower bound is simple, and that is why the conjecture was made. What is the lower bound for G(n,p)? Take a vertex of maximum degree. It has many incident edges, which we must cover by cliques, and the largest clique we can use covers only clique-number many of them. Moreover, the cliques through this vertex all share it, so they must receive different colors. Therefore we need at least (maximum degree)/(clique number) colors. Now, the maximum degree of G(n,p) is roughly np, and the maximum clique in G(n,p) has size roughly 2 log_{1/p} n. Hence we need at least roughly np/(2 log_{1/p} n), which is of order n/log n, colors to color the cliques in any clique covering of G(n,p). So the lower bound is simple, and that is also why the conjecture was made. We need to prove a matching upper bound.
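The lower-bound arithmetic, maximum degree over clique number, is easy to sanity-check numerically. This is a small sketch using the heuristic values from the talk (Δ ≈ np and ω ≈ 2 log_{1/p} n); the function name is my own:

```python
import math

def clique_cover_lower_bound(n, p):
    """Heuristic lower bound (max degree)/(clique number) for G(n, p):
    roughly np / (2 log_{1/p} n), which is Theta(n / log n) for constant p."""
    max_degree = n * p                       # typical maximum degree ~ np
    clique_number = 2 * math.log(n, 1 / p)   # typical clique number ~ 2 log_{1/p} n
    return max_degree / clique_number

# For p = 1/2 the bound equals n * ln(2) / (4 ln n), so the bound times
# (log n)/n is the constant ln(2)/4 for every n, confirming the Theta(n/log n) order.
for n in (10**3, 10**6):
    ratio = clique_cover_lower_bound(n, 0.5) * math.log(n) / n
    assert abs(ratio - math.log(2) / 4) < 1e-9
```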
To prove the upper bound, we need to show that with high probability there exists a clique covering whose chromatic number is O(n/log n). That is the difficult part, because such a bound requires us to deal with cliques of very large size, roughly log n, and handling cliques of that size is typically a hard problem. Okay, is there any question about the conjecture? All right, let's move on.

We prove the conjecture by showing that with high probability the clique cover chromatic number of G(n,p) is Θ(n/log n) for every constant p. This verifies the Prague dimension conjecture of Füredi and Kantor. We can extend this result in two directions. One direction: not just for constant p, we can prove results allowing p to tend to zero as n tends to infinity. The other direction, as I mentioned earlier: here we focus on the chromatic number of a clique covering of G(n,p), but we can instead require the clique covering to be a clique partition, meaning a collection of cliques that covers all the edges of G(n,p) with the cliques pairwise edge-disjoint. We obtain an edge-partition variant of the result, which is stronger.

So what is the motivation for studying this problem? First, as I mentioned, when p = 1/2 the binomial random graph G(n,1/2) takes each n-vertex graph uniformly at random, so the result says that for almost all graphs the Prague dimension and the clique cover chromatic number have order of magnitude n/log n. Another motivation is that to study this kind of problem we must cover G(n,p) by cliques, or decompose it into cliques, so it is closely related to covering and decomposition problems.
In this direction there are some previous results; I will mention two of them. First, with high probability the minimum size of a clique edge cover of G(n,p) is Θ(n²/log² n), by a result of Frieze and Reed. Second, Füredi and Kantor proved in 2018 that, with high probability, the minimum over clique coverings of the maximum degree of the covering is Θ(n/log n). What is the maximum degree of a clique covering? Again we view the clique covering as a hypergraph: the vertices of the hypergraph are the vertices of G(n,p), and the hyperedges are the vertex sets of the cliques in the covering; the maximum degree of the covering is the maximum degree of this hypergraph. Notice that the chromatic number of a clique covering is at least its maximum degree, because all the hyperedges through a single vertex must receive different colors. Therefore our result strengthens this one. Next I will talk about the tools we use to prove our result, and that is another motivation: we use two different random greedy approaches in the proof. Okay, is there any question so far? Let's move on.

As discussed, we only need to prove the upper bound, which is the hard part: we need to show that there exists a clique covering whose chromatic number is at most C·n/log n. Our proof strategy has two parts. In the first part we find a clique covering by a semi-random approach, also called the Rödl nibble: we build the clique covering step by step, in each step finding some collection of cliques, and then take the union of these collections to be the covering. In the second part we bound the chromatic number of this clique covering.
The idea is to bound the chromatic index of each sub-collection and then sum these bounds, and we bound the chromatic index of each sub-collection by a random greedy algorithm, which I will describe later. That is the high-level proof strategy; now let me give a few details.

First, how do we find the clique covering? We start with G_0 equal to the binomial random graph G(n,p). Given the graph G_i, we select some random collection C_i of cliques of G_i, of size roughly Θ(log n) (the sizes are governed by a density parameter p_i), remove the edges of those cliques from G_i to get G_{i+1}, and repeat. That is the semi-random algorithm that builds the clique covering. We run it for many steps; after the final step we have a graph G_I, and we take all the remaining edges of G_I as cliques of size two, forming the last collection C_I. Then the union of the collections C_0, ..., C_I really does cover all the edges of G(n,p), because every edge of G(n,p) lies in some clique. The key property of this semi-random approach is that each graph G_i behaves like a binomial random graph G(n,p_i), where p_i is the density parameter just mentioned, and p_i decreases exponentially in i; of course there are some technical twists. One more thing we can guarantee is that the cliques in each C_i are almost edge-disjoint, which is a very useful property. And note that the cliques in each C_i have size roughly Θ(log n), so they really are large cliques. That is the idea of how we find the clique covering.
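The step-by-step covering construction can be illustrated by a toy version. This is a drastically simplified sketch, not the actual nibble analysis: fixed-size cliques, naive rejection sampling, and the function name are all my own simplifications.

```python
import random
from itertools import combinations

def toy_nibble_cover(n, p, clique_size=3, rounds=200, seed=0):
    """Toy nibble-style clique cover of a random graph: repeatedly try a
    random vertex set; if it spans a clique in the remaining graph, add it
    to the cover and delete its edges. Leftover edges become size-2 cliques."""
    rng = random.Random(seed)
    edges = {frozenset(e) for e in combinations(range(n), 2) if rng.random() < p}
    remaining = set(edges)
    cover = []
    for _ in range(rounds):
        verts = rng.sample(range(n), clique_size)
        pairs = {frozenset(e) for e in combinations(verts, 2)}
        if pairs <= remaining:            # spans a clique in the current graph
            cover.append(set(verts))
            remaining -= pairs            # remove its edges, as in the nibble
    cover.extend(set(e) for e in remaining)   # final step: size-2 cliques
    return edges, cover

edges, cover = toy_nibble_cover(30, 0.5)
# Every original edge lies inside some clique of the cover.
covered = set()
for q in cover:
    covered |= {frozenset(e) for e in combinations(sorted(q), 2)}
assert edges <= covered
```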
That is part one, the semi-random part. In part two we need to bound the chromatic number of the covering C, which is at most the sum of the chromatic indices of the sub-collections C_i. If we can show that the chromatic index of each sub-collection is at most some constant C times the maximum degree of the sub-collection, we can finish the proof. Again we view each collection of cliques as a hypergraph, with vertex set {1, ..., n} and hyperedges corresponding to the vertex sets of the cliques; "chromatic index" and "maximum degree" refer to this hypergraph.

Why does this suffice? Because the cliques in each C_i are almost edge-disjoint, as discussed, the maximum degree of the hypergraph C_i is roughly n·p_i divided by the clique size: n·p_i is roughly the maximum degree of G_i, since G_i behaves like the binomial random graph G(n,p_i), and each clique through a vertex uses about clique-size many of its edges. For the last collection, since we take all the edges of G_I as cliques of size two, the maximum degree of that hypergraph is roughly n·p_I. Since the p_i decrease exponentially, a computation shows that the sum of the terms C·Δ(C_i) really is bounded by a constant times n/log n. So if we can show that the chromatic index of each of these hypergraphs is at most a constant times its maximum degree, we have a complete proof. There is one subtlety in the last step: G_I is an ordinary graph, and for it we use the fact that the chromatic index of a graph is at most roughly two times its maximum degree.
This is true by Vizing's theorem, because G_I is a graph. But for i less than I we really need to deal with hypergraphs, because the cliques in C_i have large size, so they are genuinely hyperedges. To prove this kind of bound we need a Pippenger-Spencer-type result. What is the Pippenger-Spencer result? Pippenger and Spencer proved the following: take a k-uniform n-vertex hypergraph with k a constant, and any ε > 0. Suppose the hypergraph is nearly regular, meaning the degrees of all vertices are roughly the same, and it has small codegree, meaning any two vertices lie together in only a small number of hyperedges. Then the chromatic index of the hypergraph is at most (1+ε) times its maximum degree. So this is exactly the kind of bound we want: some constant C times the maximum degree. The problem is that we cannot apply the Pippenger-Spencer result directly, because in their result the uniformity k must be a constant, while the cliques in our C_i can have size as large as Θ(log n), which is not a constant. But we can overcome this problem: C_i is a random set of cliques, and we can use the randomness.

So here is our chromatic index result for hypergraphs. It reads as follows. Suppose you have a k-uniform host hypergraph H whose uniformity satisfies k ≤ b·log n for some constant b, and suppose the host H is nearly regular, meaning every vertex of H has degree roughly D.
And suppose H has small codegree, meaning any two vertices of the hypergraph lie together in much fewer than D hyperedges. If the host hypergraph H satisfies these three conditions, then consider a random sub-hypergraph H_m obtained by taking m hyperedges of H uniformly at random, where m is not too small and m is much less than the total number of hyperedges of H. We prove that with high probability the chromatic index of H_m is at most (1+δ) times the maximum degree of H_m, where δ is a constant depending on b. Here the chromatic index of a hypergraph means the minimum number of colors needed to color the hyperedges so that two hyperedges sharing a common vertex receive different colors.

The key point of our chromatic index result is that it allows the hyperedges of the random hypergraph to have size as large as Θ(log n); that is the key feature of our theorem. As a corollary, if the uniformity is much less than log n, we recover a Pippenger-Spencer-like result: in that case b = o(1), so δ = o(1), and we get the factor 1+ε. But in our problem we really need to use uniformity Θ(log n), where b is a genuine constant rather than o(1); then δ is a constant, and we get that the chromatic index of H_m is at most C times its maximum degree, which is what we want. And we can apply it: the chromatic index of each collection of cliques C_i is at most C times its maximum degree. Here, because we take the cliques from G_i, which behaves like G(n,p_i), the host hypergraph of candidate cliques satisfies the three conditions, so the theorem applies. Therefore we have proved our bound on the chromatic index of the clique covering.
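The setting of the theorem, a random m-edge sub-hypergraph H_m of a nearly regular host, is easy to simulate. A small sketch (my own illustration, using the complete k-uniform hypergraph as an artificially perfect host, where every vertex degree is exactly D and D/|E(H)| = k/n): the maximum degree of the sample tracks D·m/|E(H)| = mk/n.

```python
import random
from itertools import combinations

def sampled_max_degree(n, k, m, seed=0):
    """Sample m hyperedges uniformly from the complete k-uniform hypergraph
    on n vertices (a perfectly regular host) and return the sample's max degree."""
    rng = random.Random(seed)
    host = list(combinations(range(n), k))
    deg = [0] * n
    for e in rng.sample(host, m):
        for v in e:
            deg[v] += 1
    return max(deg)

# Heuristic from the analysis: max degree of H_m is roughly m*k/n.
n, k, m = 30, 3, 300
prediction = m * k / n   # = 30
assert prediction / 2 <= sampled_max_degree(n, k, m) <= 2 * prediction
```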
In the remaining time I will discuss a little of the idea behind the proof of this chromatic index result. Is there any question about the statement? If not, let's discuss the proof idea.

We use a random greedy algorithm to color the hyperedges of the random hypergraph H_m using roughly (1+δ)·Δ(H_m) colors. Why are that many colors enough? Because H_m is a random sub-hypergraph, its maximum degree is roughly the maximum degree of the host graph times the fraction of hyperedges we keep: since we take m hyperedges from the host graph, Δ(H_m) is roughly D·m/|E(H)|. Because the host hypergraph H is nearly regular, its number of hyperedges is roughly nD/k, where k is the uniformity; plugging in, the maximum degree of H_m is roughly mk/n. So it is enough to color H_m with about that many colors, and we analyze the random greedy algorithm by the differential equation method.

What is the random greedy algorithm for coloring the random hypergraph? It is very simple. Assume Q is the set of all colors you may use on the hyperedges. The algorithm works as follows: at each step you sample one hyperedge of H uniformly at random, and then color it with an available color chosen uniformly at random, where a color is available if it is not used on any previously colored hyperedge that intersects the edge you sampled at this step. For example, take a 3-uniform hypergraph and four possible colors: red, blue, green, and yellow. At the first step you sample one random hyperedge, say this one. At this moment all four colors are available for it; we choose one uniformly at random, say red. Then we move on to the second step.
We sample a random hyperedge, say this one. At this moment all four colors are still available for it; we choose one uniformly at random, say green, and move to the third step. We sample a random hyperedge, say this one. Now only two colors, blue and yellow, are available, because it intersects a red edge and a green edge. We choose one of the two uniformly at random, say blue, and move on to the next step. Say we sample this edge; at this moment only one color is available, because the edge intersects a red edge, a blue edge, and a green edge, so we color it yellow, and so on. As you can see, the algorithm succeeds if at every step the sampled edge has an available color.

So our main result here is that with high probability we can properly color the m sampled edges with these q colors, and we only need to show that at every step there really are available colors for each edge. That is proved by the differential equation method, using the pseudo-random properties assumed of the host hypergraph H, namely that H is nearly regular with small codegree. We can prove that with high probability, for every hyperedge e of the host graph H and every step i between 0 and m, the number of available colors for e after i steps, denoted Q_e(i), is concentrated around an explicit trajectory of the form (proportion remaining)^k times q, where q is the initial number of colors. That is enough: if we can show Q_e(i) is much greater than one, we are done, because we never run out of colors. And that is true, even though this quantity decreases as i increases.
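The random greedy edge-coloring procedure just walked through can be written down directly. A minimal sketch of the procedure as described (the small example hypergraph below is my own, not the one from the slides):

```python
import random

def random_greedy_edge_coloring(hyperedges, num_colors, seed=0):
    """Color hyperedges in a uniformly random order; each edge gets a
    uniformly random color not yet used on any intersecting colored edge.
    Returns the coloring, or None if some edge runs out of colors."""
    rng = random.Random(seed)
    order = list(range(len(hyperedges)))
    rng.shuffle(order)
    coloring = {}
    for i in order:
        used = {coloring[j] for j in coloring if hyperedges[j] & hyperedges[i]}
        available = [c for c in range(num_colors) if c not in used]
        if not available:
            return None                 # the greedy algorithm got stuck
        coloring[i] = rng.choice(available)
    return coloring

# A 3-uniform example: each edge meets at most three others, so four
# colors always leave an available color, as in the talk's walkthrough.
H = [{0, 1, 2}, {3, 4, 5}, {0, 3, 6}, {1, 4, 6}]
col = random_greedy_edge_coloring(H, 4)
assert col is not None
for i in range(len(H)):
    for j in range(i):
        if H[i] & H[j]:
            assert col[i] != col[j]     # intersecting edges differ in color
```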
The minimum occurs when i reaches the maximum number of steps m, and because of our constraint on the uniformity, a computation shows that even at that point the number of available colors is much greater than one, using the relation between the constants δ and b. So we never run out of colors, which means the random greedy algorithm properly colors the m random hyperedges with q colors, and we are done. So at this moment we have proved our chromatic index result for random hypergraphs.

That is the proof. Here I will mention an open problem: can we determine the asymptotics of the clique cover chromatic number of G(n,p)? Recall that we proved its order of magnitude is n/log n; the problem is to determine the asymptotics, that is, the constant factor. One might start from the simple lower bound. Recall the simple lower bound for the clique cover chromatic number of G(n,p): we must cover all the edges at a maximum-degree vertex by cliques, those cliques must receive different colors, and therefore the clique cover chromatic number is at least (maximum degree of G(n,p))/(clique number of G(n,p)), which is roughly np/(2 log_{1/p} n). One may guess that this gives the right asymptotics, with this constant factor, but that is not true: the lower bound can be improved by a factor of 1 + f(p) for some function f(p) > 0. In the most interesting case, p = 1/2, this function equals one; so you can see the simple lower bound can be improved, there by a factor of two.
It is not clear what the asymptotics should be. Okay, so that is an open problem. As a summary of this talk: we asked how to determine the Prague dimension of a graph, which is the minimum chromatic number of a clique covering of the complement. We proved that with high probability the Prague dimension of the binomial random graph G(n,p) has order of magnitude n/log n for every constant p, and our result verifies a conjecture of Füredi and Kantor from 2018. To prove the result we used two random greedy approaches: first a semi-random algorithm to find a clique covering of G(n,p), and then a random greedy algorithm to color the cliques in the covering. As a new tool for the second, random greedy part, we obtained a chromatic index result for random hypergraphs which allows the hyperedges to have size as large as Θ(log n), not just constant. Some open problems: one, as I mentioned, what are the asymptotics of the Prague dimension? Here we only determine its order of magnitude. Also, for some p tending to zero we do determine the asymptotics of the chromatic number, but there are gaps in the range of p, and we are interested in whether it is possible to close those gaps. Okay, let me stop here. Thanks for listening.

Well, thank you very much. If everyone could thank our speaker in some way. Are there any questions for our speaker? I did have one question that I think you probably answered, but I didn't quite catch it. It was towards the end, actually: you had some quantity raised to the k-th power, and I wasn't quite sure what the k was there.

Yes, here k is the uniformity of the hypergraph.

Oh, okay. I've got you, thank you. I guess that part is fixed for that particular instance of the problem, or, I don't know if that's the right way to think of it.
Well, k is a function of the number of vertices here; we allow it to grow.

Nice. Okay, any other questions for our speaker?

[Question] In one of the last few slides you said the improved bound is 1 + f(p); what is f(p)?

Let me find the slide. I do not remember f(p) exactly, but there are some key properties you should know. First, f(p) is always positive, so the simple lower bound is never tight. Second, when p = 1/2, where you take each n-vertex graph uniformly at random, f(1/2) = 1, so the simple lower bound can actually be improved by a factor of two.

[Question] Do you believe that at p = 1/2 this factor of two should be the truth; is that the conjecture?

No, the story is the following. You might guess the asymptotics look like the simple lower bound, where the factor is one, but it can be improved by some other approaches, so if we want to make a conjecture, maybe neither is correct. Another thing about f(p): I believe it is an increasing function, and when p tends to one, the function tends to infinity.

Let me ask just one more time: any questions for our speaker? If not, then I'd like to thank you again. I know it's quite late there; I really appreciate it. And thank you for an excellent talk; it's always interesting when you can show the existence of something using algorithms.

Yeah, thanks a lot for attending.

Let me just end by saying next week is Thanksgiving break, so we will not be having a seminar next week. Everyone have a safe and well-deserved break. Thank you, everyone.