All right, thanks everybody for making it out today. Today we're excited to have Chris Cox from Iowa State University, who will be talking to us about the maximum number of paths and cycles in planar graphs. Go ahead and take us away, Chris.

Yeah, well, thanks Drew. So I'm going to talk about some recent work with Ryan Martin that's exactly about what the title says: trying to maximize the number of paths or cycles in a planar graph. In general, for graphs G and H we can define N of G, H to just be the number of copies of H in G. Here we're talking about unlabeled copies, and by a copy I just mean a subgraph; I don't care if it's induced or not. So a clique has every graph on at most that number of vertices as a copy. The actual question that we're interested in is a parameter called N sub curly P of n comma H, which is the maximum number of copies of the graph H that can appear in an n-vertex planar graph. The curly P here is just my notation for the set of all planar graphs. This is the same as the maximum of all the N of G, H's where G is an n-vertex planar graph. So this is just a parameter that we can study, and of course the main question that I care about is the asymptotics of this function: for various planar graphs H, what is the asymptotic of N_P of n, H as n goes to infinity? So, what is the max? Yes?

I have one question: N of G, H — by copies, do you mean induced, or just subgraphs? Just subgraphs. So the clique has every graph on at most that many vertices as a copy. Also, people can feel free to turn on their cameras. I don't know why everyone just went dark; now I'm just talking to myself, but okay. I think we're a little bit worried that it's distracting, or sometimes we don't want to be staring at our cameras all the time. Well, whatever you want to do. All right, so here I just want to know how many copies of H can possibly appear in a planar graph.
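To make the definition of N(G, H) concrete, here is a minimal brute-force sketch (mine, not the speaker's code) that counts unlabeled, not-necessarily-induced copies of H in G: count injective edge-preserving maps from V(H) into V(G), then divide by the number of automorphisms of H.

```python
from itertools import permutations

def embeddings(host, pattern):
    """Number of injective maps V(pattern) -> V(host) sending edges to edges.
    Graphs are dicts mapping each vertex to the set of its neighbors."""
    count = 0
    for image in permutations(host, len(pattern)):
        phi = dict(zip(pattern, image))
        if all(phi[v] in host[phi[u]] for u in pattern for v in pattern[u]):
            count += 1
    return count

def copies(G, H):
    """N(G, H): unlabeled, not-necessarily-induced copies of H in G.
    Labeled embeddings, divided by |Aut(H)| = embeddings of H into itself."""
    return embeddings(G, H) // embeddings(H, H)

K4 = {i: {j for j in range(4) if j != i} for i in range(4)}
K3 = {i: {j for j in range(3) if j != i} for i in range(3)}
C4 = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
```

For instance, K4 contains 3 unlabeled four-cycles and 4 triangles, which this counter reproduces.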
One thing: if I were to replace curly P with the set of all graphs, so now I'm just maximizing the number of copies of H in any graph, that's a silly question, because the correct answer is however many copies of H appear in a clique. So we should restrict the family of graphs that we're actually looking at.

All right, so let me tell you a bit about some previous results. Really the first results on this, although they weren't phrased in this way, go back to Euler with the invention of planar graphs. Well, N_P of n, K2 — that's just asking what is the maximum number of edges in a planar graph, and this is well known to be about 3n. One thing: I'm only caring about the asymptotics of this function. Yes, really it's 3n minus 6 or whatever, but I'm just going to care about the leading term. Also, because K5 and K3,3 aren't planar, N_P of those guys is just always zero. So those all go back to Euler, and anyone who knows anything about planar graphs knows these facts. But the actual study of this extremal function really goes back to Hakimi and Schmeichel in 1979, who were concerned with how many copies of various cycle lengths can appear in planar graphs. They showed that the maximum number of triangles is asymptotic to 3n, and the maximum number of four-cycles is asymptotic to n squared over 2. A bit later, Alon and Caro generalized their four-cycle result and got the maximum number of K_{2,k}'s for k at least 2. They also handled k equals 1, in which case K_{2,1} is just the path on three vertices; I didn't write that result down, mostly because I forgot. More recently, Wood was able to figure out the maximum number of K4's that can appear in a planar graph. And just a couple of years ago, Győri and friends figured out the maximum number of paths on four vertices and the maximum number of five-cycles.
These were two different papers that were put out about two years ago. One thing I should mention: I'm only stating asymptotics here, but all of the results up to this point are actually exact answers; I'm only recalling the asymptotics, but these numbers are known exactly. And just last year, Ghosh and friends figured out the maximum number of P5's in a planar graph. This is the first result that is genuinely only asymptotic: they did not figure it out exactly. And they had a conjecture of what should happen for all odd paths, so paths on an odd number of vertices: they conjectured that the answer should be 4m times (n/m) to the m plus 1, whatever that means; I'll explain briefly where this comes from in a bit. And this is really the starting point of Ryan's and my work: trying to settle this conjecture. So what happens for paths on an odd number of vertices? Let's talk about that a bit.

So, our results on paths with an odd number of vertices. First off, we verify the conjecture of Ghosh and friends for the seven-path. Namely, if you plug things into their formula, you get 4/27 times n to the fourth, and that's indeed what we were able to prove. For all other odd paths, we were able to significantly improve the upper bound. One thing I should mention is that the order of magnitude here, n to the m plus 1, is actually an easy result that I'm sure any of you could sit down and prove in a couple of minutes; it's very simple to get the order of magnitude of this function. So we're really fighting for the leading constants. In fact, in a proof where you just get the order of magnitude, you would probably prove a leading constant of something like 6 to the m, which is pretty bad. We're able to bring that down to n to the m plus 1 over about m factorial, which is a pretty big improvement.
As I said, Ryan's and my motivation was just this conjecture about odd paths, but as we were working, we figured out that these same ideas actually apply to even cycles. In fact, the ideas apply to a much broader class of graphs if you want to compute this parameter, but I'll leave it at even cycles and odd paths for the sake of this talk. When it comes to even cycles, if you remember, Hakimi and Schmeichel already figured out the four-cycle. So we figured out the next two cases, namely six-cycles and eight-cycles, and again significantly improved the upper bound for even cycles in general. Again, the order of magnitude of n to the m is very simple and any of you could prove it quickly, but the constant you would probably get out of that quick proof is really awful — again, it would be something like 6 to the m, probably. And in fact, we have a conjecture — attributed to us not because I think we're the only ones who've thought of it, but because I haven't seen it in the literature — that this pattern we see for six-cycles and eight-cycles should continue for all 2m-cycles. So the answer for the maximum number of cycles on 2m vertices should be (n/m) to the m.

Now, where is this actually coming from? It's the following construction. Start with an m-cycle, so vertices 1, 2, 3, et cetera, up to m. Now take each edge and blow it up into an independent set of size n/m, and connect all the vertices that you just blew up to both of the endpoints of the original edge. Now if we want to count 2m-cycles: we first lay down every other vertex at the vertices of the original cycle, and then for each of the intermediate vertices we have n/m choices. So we have about (n/m) to the m 2m-cycles in this construction. And you can quickly verify that this is in fact a planar graph — I've drawn it.
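As a sanity check on this construction, here is a small sketch (my code, not from the talk) that builds the blow-up of an m-cycle and brute-force counts its 2m-cycles. For m = 3 with independent sets of size t, the only 6-cycles are the alternating ones, so the count is exactly t cubed, matching the (n/m) to the m heuristic.

```python
def blown_up_cycle(m, t):
    """Blow each edge of an m-cycle into an independent set of size t.

    Vertices 0..m-1 form the original cycle; each cycle edge (i, j) gets
    t extra vertices adjacent to both i and j.  The original cycle edges
    are kept here; asymptotically that changes nothing.
    Returns an adjacency dict {vertex: set of neighbors}."""
    adj = {i: set() for i in range(m)}
    nxt = m
    for i in range(m):
        j = (i + 1) % m
        adj[i].add(j); adj[j].add(i)
        for _ in range(t):
            adj[nxt] = {i, j}
            adj[i].add(nxt)
            adj[j].add(nxt)
            nxt += 1
    return adj

def count_k_cycles(adj, k):
    """Count unlabeled k-cycles by extending paths, then divide by the
    2k rotations and reflections that trace out the same cycle."""
    total = 0
    def extend(path):
        nonlocal total
        if len(path) == k:
            total += path[0] in adj[path[-1]]
            return
        for w in adj[path[-1]]:
            if w not in path:
                extend(path + [w])
    for v in adj:
        extend([v])
    return total // (2 * k)

G = blown_up_cycle(3, 3)          # triangle, each edge blown into 3 vertices
six_cycles = count_k_cycles(G, 6)  # exactly 3^3 = 27
```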
One interesting thing — in fact, this is the exact construction that gives the lower bound in the conjecture of Ghosh and friends. So that number, 4m times (n/m) to the m plus 1: they believe this is also the asymptotically extremal example for paths on an odd number of vertices. One thing I should note about this construction is that it's not a maximal planar graph; it only has about 2n edges instead of 3n edges. So it's a bit weird that we're missing out on n edges that we could still add to the construction, but asymptotically it doesn't matter at all. This is really the base structure.

One more thing I'll mention about all of our results, before I tell you a bit about the ideas behind the proofs: none of our results actually care that the host graph is planar. All we care about is that it has linearly many edges — which planar graphs do, they have at most 3n edges, but in fact any linear number of edges is okay — and that there is no legitimate copy of K3,3. Planar graphs have no subdivided copy of K3,3, no topological K3,3, but we're not even taking advantage of that fact; we only use the fact that there is literally no actual copy of K3,3. So these results are very broad. I should ask if there are any questions before I go on.

As I said, we were really motivated by the question of odd paths, but I actually want to first tell you how to get, say, the six-cycle. I want to walk you through a sketch of this proof, mostly because, well, it turns out that even cycles are easier than odd paths in many regards. So what I'd like to do is sketch a proof of the maximum number of C6's in a planar graph and really harp on the main ideas behind it. All right, so we'll start by fixing a planar graph G on n vertices, and I want a bound on the number of copies of C6 that can appear in this graph.
Well, let's look at a C6. I'll draw one right here: our lovely six vertices, great. Let's pick three of these vertices to be what I'll call anchors: some x, some y, and some z. Now imagine that we were to lay down these three anchor vertices inside of G and ask how many ways there are to complete them to a six-cycle. All I have to do is figure out how many choices there are for each of the remaining vertices. Well, this vertex has to be adjacent to both x and y, so the number of choices is the co-degree of x and y in G — I'll write degree of x comma y to denote the co-degree of two vertices, the number of common neighbors that they have. Same thing here: this one has to be adjacent to both z and x, so there are at most co-degree of z and x choices. And this one has to be adjacent to both y and z.

In other words, we can write down a quick and dirty bound: we sum, over all triples x, y, z in V(G) choose 3, the number of ways to complete those three vertices to a six-cycle like this. So it's the co-degree of x and y, times the co-degree of y and z, times the co-degree of z and x. And really, if we wanted to, we could divide this by two, because there are two choices for the three anchor vertices — they could either be these three vertices or those three vertices — so we've counted each six-cycle twice. So we can divide by two if we really want to.

Now, this alone is enough to get the order of magnitude: if you just play with this expression and use the fact that in planar graphs the sum of the degrees is at most 6n, you'll easily get the right order of magnitude. But getting the leading constant is more tricky. In order to actually get the leading constant, we need two big ideas. Idea number one is to partition the vertices of G into two sets.
The first set I'll call Big, and the second I'll call Small. What is Big? Big is the set of all vertices with very large degree — let's say degree at least epsilon n, where you can think of epsilon as a very, very tiny but fixed constant. And Big is much smaller than Small, at least in this picture, simply because, since there are only linearly many edges, the size of Big is order one over epsilon. So the number of big-degree vertices is constant, whereas Small is essentially everything else.

Okay, I've partitioned my vertices like this, and now the idea is to show that almost every six-cycle actually alternates between Big and Small. In other words, almost every single six-cycle looks like this: three vertices in Big and three vertices in Small. There are very, very few six-cycles that do anything else. So essentially every six-cycle is controlled by the big-degree vertices. If we go back to this picture, that means I can count six-cycles, at least asymptotically, by only considering anchor vertices inside the Big set. In other words, we can bound N of G, C6 asymptotically by the same sum over triples x, y, z, but now with x, y, z just big vertices: co-degree of x and y, times co-degree of y and z, times co-degree of z and x. One thing to notice is that we do not get the factor of one half anymore, because each six-cycle is now only counted once — it's only counted by putting the anchors on the big-degree vertices. I'm already very happy with this reduction, because before we were summing over n choose 3 different things — n cubed different things — but now the size of Big is constant, so we're summing over only a constant number of things. Already I think that's a great improvement.
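The quick-and-dirty co-degree bound from a moment ago is easy to check numerically. A small sketch (my illustration, not the speaker's code): in K6 every co-degree is 4, so the bound evaluates to (1/2) times C(6,3) times 4 cubed, which is 640 — comfortably above the true count of 6!/12 = 60 six-cycles in K6.

```python
from itertools import combinations

def codegree_bound_c6(adj):
    """(1/2) * sum over triples {x,y,z} of d(x,y) d(y,z) d(z,x), where
    d(u,v) = |N(u) & N(v)| is the co-degree.  adj: vertex -> neighbor set."""
    def cod(u, v):
        return len(adj[u] & adj[v])
    return sum(cod(x, y) * cod(y, z) * cod(z, x)
               for x, y, z in combinations(adj, 3)) / 2

K6 = {i: {j for j in range(6) if j != i} for i in range(6)}
bound = codegree_bound_c6(K6)   # 20 triples * 4^3, divided by 2 = 640
```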
So we only care about anchoring our C6's on the big-degree vertices. And now idea number two comes in. Idea number two says that the co-neighborhoods of vertices in Big — if I look at all the co-neighborhoods between big-degree vertices — are asymptotically disjoint. There are very, very few vertices that lie in more than one co-neighborhood of big vertices. In other words, what our graph really looks like, at least up to approximation, is a constant number of big-degree vertices, and in between them live a bunch of small-degree vertices in these — well, Ryan and I like to call them tumors. And you'll notice this type of structure is exactly what popped up when I was telling you about the conjecture for what should happen for even cycles: you have a cycle where you blow up each edge into a bunch of vertices.

Okay, we're making pretty good progress. Let's do the following: define a function mu of x, y to be the co-degree of x and y divided by n. So mu is a function on Big choose 2 — we're only going to consider co-degrees between big vertices. And because the co-neighborhoods of the vertices in Big are asymptotically disjoint, if I add up all the co-degrees, I get at most n plus a little bit. Dividing by n, that says that if I add up all of the mu of x, y's, I get about one. So mu is asymptotically — in scare quotes — a probability mass on Big choose 2, and this is all because the co-neighborhoods of big-degree vertices are asymptotically disjoint. So let's rewrite our bound in terms of mu: N of G, C6 is asymptotically at most the sum over x, y, z in Big choose 3 of mu of x, y, times mu of y, z, times mu of z, x, all multiplied by n cubed, to take care of the division by n in each of the three mu's. Well, this is fantastic.
This tells me that if I want to bound this guy, it is at most the supremum — I'll tell you over what in a second — of the sum over x, y, z in capital X choose 3, where capital X is just some set, of mu of x, y, times mu of y, z, times mu of z, x, all times n cubed, where the supremum is taken over capital X a finite set and mu a probability mass on X choose 2. In other words, if we want an asymptotic upper bound on the maximum number of copies of C6: the n cubed is already there — that's the right order of magnitude — and the constant out in front will be the value of this optimization problem over probability masses.

Now let's talk a little about this function. You can think of X as just some vertex set and mu as a probability mass on the edges of the clique on those vertices — that is, on X choose 2. And what is this expression? Well, it's a constant — which is pretty easy to compute, I think it's one sixth in this case — times the probability that if we sample three edges independently according to the distribution mu, they form a triangle. So what we're really asking is: I allow you to put some probability distribution on the edges of a clique, and I want to maximize the probability that if I sample three edges independently, they actually form a triangle. I want to know which probability mass is best for this. Are there any questions so far? No? Okay. So again, bear this in mind: I want to maximize the probability that three edges sampled independently — so it may be that I pick the same edge three times; they're completely independent samples — form a triangle.
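In this finite setting the objective is easy to evaluate directly. Here's an illustrative sketch (code and setup mine): the uniform mass on a triangle's three edges gives (1/3) cubed = 1/27, while spreading the mass uniformly over the six edges of K4 does strictly worse, 4 times (1/6) cubed = 1/54.

```python
from itertools import combinations

def beta_c3(mu):
    """Sum over unordered triples {x,y,z} of mu(xy) mu(yz) mu(zx).
    mu maps frozenset edges to masses summing to 1; this equals (1/6)
    times the probability that three independent mu-samples form a triangle."""
    verts = sorted(set().union(*mu))
    e = lambda u, v: mu.get(frozenset((u, v)), 0.0)
    return sum(e(x, y) * e(y, z) * e(z, x)
               for x, y, z in combinations(verts, 3))

mu_triangle = {frozenset(p): 1 / 3 for p in [(0, 1), (1, 2), (0, 2)]}
mu_k4 = {frozenset(p): 1 / 6 for p in combinations(range(4), 2)}
```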
Let me give this a name: I'll call it beta of mu, C3 — because again we're looking for triangles — and it's just this definition: the sum over x, y, z in capital X choose 3 of mu of x, y, times mu of y, z, times mu of z, x. Our goal is to show that the supremum of the beta of mu, C3's is at most one third cubed, because that is the constant we're gunning for: to show the answer is n cubed divided by 3 cubed. One thing you can notice, if you think about it for a moment, is that this is exactly the value you would get by taking the uniform distribution on the edges of a triangle to start with. In other words, we're trying to show that if I want to maximize the probability of getting a triangle, then I should just take the uniform distribution on the edges of a triangle. So that's what we're going to argue. Let me show you why this is true.

All right, let's fix an optimal mu, and let's suppose it has minimum support — so as few edges as possible have positive mass. Now you might object: wait, Chris, what do you mean, an optimal mu? This is a supremum — how do you know the supremum is achieved? There is a way to argue that it is achieved, but that's actually not necessary for the rest of this proof; you can use a little trick to get around it. But let's ignore that for a moment and pretend the supremum is indeed achieved, so we can fix a mu that achieves it, just to make our lives easier. So we have at our disposal an optimal mu whose support is as small as possible. Let's define what I call the support graph of mu, which I'll denote by G sub mu, and the support graph is exactly what it sounds like: mu is a probability mass on the edges of a clique, so the edge set of G mu is precisely the support of mu, and its vertex set is whatever vertices those edges span.
In other words, G mu just tells us which edges have a positive probability of being sampled. Okay. So I have the following claim about G mu: every pair of edges in G mu is contained in a common triangle in G mu. In other words, if I give you any pair of edges that have positive mass under mu, then there's a triangle which contains both of them and whose edges all have positive mass. There's always some way to pick a triangle that uses this pair of edges, no matter which pair I give you. The astute among you will notice that if we can prove this, we're basically done — but I'll leave that for a moment.

All right, so why is this true? Let's define something. For an edge, say x, y, define t of x, y to be the sum, over all vertices z which are not x or y, of mu of x, z times mu of y, z. If we think about this in probabilistic language, it's essentially a conditional probability: the probability of picking a triangle conditioned on already having picked the edge x, y. It's the probability of completing the edge x, y into a triangle — essentially, up to a constant.

Now suppose the claim fails: suppose e and s both have positive mass — they're in the support — but they're not in a common triangle of G mu. If that's the case, then we can rewrite beta of mu, C3 in the following way. First look at all the triangles which use e: those give us mu of e times t of e. Plus, for the triangles which use s, mu of s times t of s. Notice that we haven't over-counted, because by assumption any triangle containing both e and s has zero mass, so there's no chance of picking it; we only have to worry about triangles that are actually triangles in G mu. So these are the ones that use e, these are the ones that use s, and then there's what I'll just call the rest: triangles that use neither e nor s. All right.
Without loss of generality, we can assume that t of e is at least t of s — fine, no big deal. Now consider a new probability mass, mu prime, which simply moves the mass on the edge s over to e. Literally: the mass of e under mu prime is the mass of e plus the mass of s under mu, and s now has no mass on it. Let's see what happens when we compute beta of mu prime, C3. First we look at all the triangles that contain e. The new mass of e is mu of e plus mu of s, and because s and e were never in a common triangle, the function t of e stays the same — it doesn't know whether it's under mu or mu prime. Next, the triangles which use s are now zeroed out, because s has no mass. And then the rest — and this is the same rest as above, because we've only changed the mass on e and on s, and the rest were triangles using neither of those. But now we use the fact that t of e is at least t of s: we get at least mu of e times t of e, plus mu of s times t of s, plus the rest. And that's exactly beta of mu, C3. But that's a contradiction: we started by saying that mu is an optimal mass with minimum support, but we just constructed a new mass mu prime which does at least as well as mu and has strictly smaller support — we moved all the mass from s to e, so mu prime has one fewer edge in its support. This shows that every pair of edges in the support graph must be contained in a common triangle in the support. Are there any questions?

Okay. Well, now we're basically done, because there's only one graph with this property: if every pair of edges is contained in a common triangle, then the graph must be a triangle. Possibly there are isolated vertices too, but we only care about edges, so this just says that G mu is a triangle. Say the triangle has vertices x, y and z. Now we compute beta of mu, C3 for this optimal mu. This is equal to — well, there's only one triangle.
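Before finishing the computation, this mass-shifting step can be watched on a toy example (code and helper names are mine, not the speaker's): take mu uniform on the edges of two vertex-disjoint triangles. An edge of one triangle and an edge of the other are never in a common triangle, and moving all of s's mass onto e does at least as well while strictly shrinking the support.

```python
from itertools import combinations

def beta_c3(mu):
    """Sum over unordered triples {x,y,z} of mu(xy) mu(yz) mu(zx)."""
    verts = sorted(set().union(*mu))
    e = lambda u, v: mu.get(frozenset((u, v)), 0.0)
    return sum(e(x, y) * e(y, z) * e(z, x)
               for x, y, z in combinations(verts, 3))

def t(mu, edge):
    """t(edge): the conditional weight of completing `edge` to a triangle."""
    u, v = tuple(edge)
    e = lambda a, b: mu.get(frozenset((a, b)), 0.0)
    return sum(e(u, z) * e(v, z) for z in set().union(*mu) - {u, v})

# mu: uniform on the edges of two vertex-disjoint triangles {0,1,2}, {3,4,5}
edges = [frozenset(p) for p in
         [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5)]]
mu = {f: 1 / 6 for f in edges}

e1, s = edges[0], edges[3]     # both in the support, in no common triangle
assert t(mu, e1) >= t(mu, s)   # the WLOG from the proof (here they're equal)

mu2 = dict(mu)
mu2[e1] = mu[e1] + mu[s]       # shift all of s's mass onto e1
del mu2[s]                     # s leaves the support
```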
So when we sum over the triangle, it's just mu of x, y, times mu of y, z, times mu of z, x. Apply the AM–GM inequality, and since mu is a probability mass, the sum at the top adds up to one — so this is at most one third cubed. And there's our proof. So if we go all the way back — we've shown, where did it go? Now I'm losing all my pages. Okay, I don't know where it went. But this says that the maximum number of copies of C6 is asymptotically at most the supremum over mu of beta of mu, C3, times n cubed, and that is n cubed over 3 cubed, just as we wanted.

This idea works for any even cycle. Namely, N_P of n, C2m corresponds to the function beta of mu, Cm, where this is exactly what you'd think: some computable constant times the probability that if you sample m edges independently from the probability mass mu, they form a copy of an m-cycle. In other words, if I want to solve this extremal problem for the 2m-cycle, I just need to understand this optimization question for the m-cycle: which mass maximizes the probability that m independently sampled edges form an m-cycle? Our conjecture is that beta of mu, Cm is at most one over m to the m — in other words, that it's maximized by the uniform distribution on the edges of Cm. So far we only have a proof for m equals 3; that's the one I just showed you. We also showed it for m equals 4, but it's more involved. That's why we get the six-cycle and the eight-cycle. I think it should be possible to get beta of mu, C5 and C6, because in order to get the four-cycle case we had to prove a number of structural results about this mass, and with those in hand you should essentially be able to get them — but I don't know how to get those two yet, and it gets more and more difficult as m gets larger and larger.

Right. So one thing I'll quickly mention is that there was nothing really special about cycles. What was really important is that C2m is the one-subdivision of Cm.
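For small m the cycle objective can at least be evaluated numerically. A quick sketch (my code) checking the conjectured optimum in the m = 4 case just mentioned: the uniform mass on the four edges of C4 gives (1/4) to the fourth = 1/256, beating the uniform mass on K4, which gives 3 times (1/6) to the fourth.

```python
from itertools import combinations, permutations

def beta_cm(mu, m):
    """Sum over unlabeled m-cycles x1...xm of mu(x1x2)...mu(xm x1).
    Each cycle appears 2m times among ordered tuples, so divide by 2m."""
    verts = sorted(set().union(*mu))
    e = lambda u, v: mu.get(frozenset((u, v)), 0.0)
    total = 0.0
    for p in permutations(verts, m):
        w = 1.0
        for i in range(m):
            w *= e(p[i], p[(i + 1) % m])
        total += w
    return total / (2 * m)

mu_c4 = {frozenset(p): 1 / 4 for p in [(0, 1), (1, 2), (2, 3), (0, 3)]}
mu_k4 = {frozenset(p): 1 / 6 for p in combinations(range(4), 2)}
```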
If I subdivide each edge of Cm once, I get C2m. And this general idea of relating the extremal problem to an optimization problem over probability masses really works for any graph that's formed by a one-subdivision. There are some restrictions, but generally you can phrase this correspondence for just about any graph that is a one-subdivision. Are there questions? No? Yeah. Oh, okay.

So now I want to tell you a little about paths. If I bring back up our results on paths: we actually got the right answer for the seven-path, using a very similar idea. I want to give you a sense of the slight difference between paths and cycles. I don't want to prove our P7 result, because it's somewhat complicated, but I do want to re-prove the result of Ghosh and friends, who proved the asymptotic answer for the number of paths on five vertices. This is very different from their proof, and it uses the same ideas as for our cycles. So let's at least sketch a proof that the maximum number of P5's in a planar graph is asymptotically at most n cubed. This is the result of Ghosh and friends.

All right. We start with a very similar idea with anchor vertices. If we draw a P5 — here's a P5 right here — we're going to anchor these two vertices x and y and count how many ways there are to extend this to a five-path. Well, there are co-degree of x and y many choices for this middle guy, degree of x many choices here, and degree of y many choices here. So we can bound N of G, P5 by summing, over all pairs x and y — oh, sorry, I want to sum over ordered pairs x not equal to y; that way I can divide by two — the degree of x, times the co-degree of x and y, times the degree of y, and we get to divide by two because there are two orderings of this path. So now we apply our two ideas. Again, we partition into Big and Small, and we show that almost every single P5 has its two anchor vertices in the Big set.
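The anchoring bound just stated is equally easy to check on a small example (sketch mine, not from the talk): in K5 every degree is 4 and every co-degree is 3, so the bound is (1/2) times 20 ordered pairs times 4 times 3 times 4, which is 480 — above the true count of 5!/2 = 60 five-vertex paths in K5.

```python
def p5_anchor_bound(adj):
    """(1/2) * sum over ordered pairs x != y of d(x) d(x,y) d(y), where
    d(x,y) is the co-degree; each P5 is counted by its two anchors."""
    total = 0
    for x in adj:
        for y in adj:
            if x != y:
                total += len(adj[x]) * len(adj[x] & adj[y]) * len(adj[y])
    return total / 2

K5 = {i: {j for j in range(5) if j != i} for i in range(5)}
bound = p5_anchor_bound(K5)   # 20 ordered pairs * (4*3*4), divided by 2
```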
So almost every single P5 looks like this. Automatically, we can improve the bound: asymptotically, it's at most the sum over x not equal to y, but now in Big, of degree of x, times co-degree of x and y, times degree of y, and we still get to divide by two. Note that here we actually get to keep this factor of one half, because each path is still counted twice: I get the same path whether I pick this anchor first and that one second, or in the opposite order. And now we apply our second big idea, that these co-neighborhoods are almost disjoint. Actually, we need a little bit more here, because these outer terms don't live in co-neighborhoods — they are literally the degrees of the vertices x and y. So it takes a bit more effort to show that the graph really does have this blow-up structure, as opposed to having some vertex with a bunch of little vertices just hanging off of it. It takes an argument, but it's doable.

What we wind up getting is that the maximum number of P5's can be asymptotically bounded above by the following. Again, because the co-neighborhoods are disjoint, we change degrees into a probability mass: the supremum of the sum over x not equal to y in some set X of — and I'll define these in a moment — mu bar of x, times mu of x, y, times mu bar of y, times, in this case, n cubed over two, because we divided by n three times, so we get an n cubed. Again, the supremum is over X a finite set and mu a probability mass on X choose 2. The only term I haven't defined is this mu bar. This is the weighted degree of a vertex: if I think of the probability mass as weights on the edges of a clique, mu bar of x is the weighted degree — the sum over all y not equal to x of mu of x, y.
In other words, if I sample an edge from the probability distribution mu, then mu bar of x is the probability that that edge is incident to the vertex x. Are there questions? I know I didn't tell you too many details, but they're roughly the same as for cycles, just slightly more complicated. We again get down to an optimization question over probability masses: the goal is to show that for any probability mass mu, the sum over x not equal to y of mu bar of x, times mu of x, y, times mu bar of y, is at most 2. Because then that 2 cancels the 2 in the denominator, and we get the correct asymptotic answer of exactly n cubed.

Well, how can we prove this? Let's define a matrix M whose rows and columns are indexed by X, where the x, y entry is just the mass mu of x, y, and where we declare mu of x, x to be zero, so the diagonal entries of M are zeros. We make a few observations. One: M is non-negative — its entries are numbers between zero and one. Two: it's symmetric, simply because the probability mass has no ordering on its pairs; it's just the mass of the edge. And three: its row sums are all at most one, because the row sums of M are precisely the mu bars, which are probabilities — a row sum is just the probability that a sampled edge is incident to that vertex. Observations one and three together tell me that the maximum eigenvalue of M is at most one: it's a non-negative matrix all of whose row sums are at most one, so its maximum eigenvalue is at most one as well. And symmetry tells me something about the Rayleigh quotient: for any vector x, the inner product of x with Mx is at most lambda max times the inner product of x with itself — and lambda max is at most one.
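These three observations, and the bound they give, can be verified numerically for a random mass (a sketch under my own setup, using numpy): build a symmetric matrix of edge masses totaling one, and check that the weighted degrees mu bar are probabilities summing to 2, that the top eigenvalue is at most 1, and that the objective mu-bar-transpose times M times mu-bar never exceeds 2.

```python
import numpy as np

rng = np.random.default_rng(0)
k = 6                                   # vertices of the clique
U = np.triu(rng.random((k, k)), 1)      # random weights on the edges
M = U + U.T                             # symmetric, zero diagonal
M /= M.sum() / 2                        # normalize: total edge mass = 1

mu_bar = M.sum(axis=1)                  # weighted degrees (row sums)
lam_max = np.linalg.eigvalsh(M).max()   # nonneg + row sums <= 1  =>  <= 1
objective = mu_bar @ M @ mu_bar         # sum of mu_bar(x) mu(x,y) mu_bar(y)
```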
So if I take the inner product of x with Mx, that's at most the inner product of x with itself. Now we rewrite this sum of mu bar of x, times mu of x, y, times mu bar of y, thinking of mu bar as a vector — it's a function from capital X to R, so it's a vector indexed by X. The sum is precisely the inner product of the vector mu bar with M times mu bar, which is at most the inner product of mu bar with itself. Writing out what that means, it's the sum over all x of mu bar of x squared. But each mu bar is a probability, so it's at most one, and I can delete the square and only get bigger. So the question is: what is the sum of the mu bars? Well, mu bar is a weighted degree, and the handshaking lemma tells me that the sum of the degrees is twice the number of edges — in this weighted sense, the sum of the weighted degrees is twice the total weight on the edges. But the edge weights form a probability mass, which sums to one. So the sum of the mu bars, by the handshaking lemma, is two, and we have proved exactly what we needed to.

All right. So, generally — I want to reiterate — N_P of n, C2m corresponds to the question: maximize the probability that if you sample edges e1 through em from some probability distribution, they form an m-cycle, a copy of Cm. And we can redo everything that we did with P5 to show that N_P of n, P 2m plus 1 corresponds to maximizing the following function: the sum over distinct x1 through xm of mu bar of x1, times mu of x1, x2, and so on, down to mu of x m minus 1, x m, times mu bar of x m. This expression is slightly more difficult to write down as a clean probability, but it can be written as a probabilistic event; it's just that these mu bars make it a bit difficult — or really, just not very enlightening — to describe. And these problems are pretty much wide open.
So, again, all we know is that this one is solved for, well, technically we can do it when m is two, so we'll say two, three, and four. And this one is only solved when m is either two or three; they're wide open otherwise. One thing I'll mention is that this problem is significantly easier than this problem. For instance, I say maximize here; really, it's a supremum over all probability masses. In the cycle question, I can prove that the supremum is actually achieved, so it's literally a maximum. In the path problem, I have absolutely no idea how to even prove that the supremum is achieved. And essentially the entire difficulty comes from the dangly leaves: the fact that these mu bars show up makes things very, very difficult, and I have no good way around it. So at least when m is, say, five or six for the cycle question, if you had a computer program that could do nonlinear optimization really well, you could solve this problem. Even if you had a computer that could do nonlinear optimization well, I don't know how you'd even start to solve the path problem, simply because I have no idea how many edges should even be in play. It could be that there is no supremum, and hence you keep getting better and better as you have more and more edges. I don't believe that to be the case, but it technically could be. All right, so I'll end there. Please let me know if you have any questions. Thanks, Chris. If we could all thank our speaker in some way, and then we'll go ahead and open it up for some questions. Do we have any questions for our speaker? Yes, I do. You kind of steered completely around it, but I'm guessing you're completely aware that you just did a bunch of, like, graphon maximization problems, right? The triangle one was like: maximize the number of triangles in a graphon. I mean, well, you say graphon, but we're in a very sparse setting. Yeah.
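On the nonlinear-optimization remark: for small cycles the optimization is concrete enough to poke at by computer. A crude sketch for the triangle case m = 3 (my own illustration, not the speaker's code; the ground-set size and the naive random search are arbitrary choices), using the fact that the probability that three i.i.d. edge-samples form a triangle equals the trace of M cubed:

```python
import itertools
import numpy as np

rng = np.random.default_rng(2)
n = 5
edges = list(itertools.combinations(range(n), 2))

def cycle3_prob(w):
    """Probability that 3 i.i.d. edge-samples from mass w form a
    triangle: each triangle is hit by 3! = 6 ordered samples, which
    is exactly trace(M^3) for the symmetric mass matrix M."""
    M = np.zeros((n, n))
    for (x, y), mass in zip(edges, w):
        M[x, y] = M[y, x] = mass
    return np.trace(M @ M @ M)

# Uniform on one triangle's edges achieves 3!/3^3 = 2/9.
w_tri = np.array([1/3 if set(e) <= {0, 1, 2} else 0.0 for e in edges])
assert np.isclose(cycle3_prob(w_tri), 2/9)

# Naive random search over the simplex: nothing we try beats 2/9,
# consistent with 2/9 being the proved optimum for triangles.
best = max(cycle3_prob(rng.dirichlet(np.ones(len(edges))))
           for _ in range(2000))
assert best <= 2/9 + 1e-9
```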
So aren't the graphons trivial when you're in this setting? Because, again, all of our results apply when, planarity never mattered throughout all of this; it was only linearly many edges and no legitimate copy of K33. But don't graphons require you to have n squared edges to even exist and not be trivial? Well, I mean, there are these sparse graphon models with, like, linearly many edges, and so maybe it fits into that paradigm. But also, this question, the way you have it written there with the C_m's, that's over any mu? Any, so the mu here is just a probability mass on X choose 2, where X is just some finite set. So this mu, this mu is a graphon. That's true, yeah. Yeah, so the mu itself is a graphon, absolutely. I mean, I know essentially nothing about graphons, but the mu is a graphon, yes. So how is this not just maximizing, and I mean, of course there's all this run-up pinning down the structure before you can use the optimization, but... Well, one thing, and again, I don't know anything about graphons, so let me preface it with that: the one thing that might make it not a graphon is the fact that this X is literally finite. So I say that the mu is a graphon, but it's in some sense not really, because it's important that the vertex set is actually finite here. But all of the examples that we believe actually optimize this do have finitely many vertices. So you're not going to get better as you get more and more vertices, which may actually hinder representing this as a graphon. I mean, in some sense it still is, because I can take step functions, right? But I don't know the right way to think about that. Another thing that's important here, that I think actually makes this...
I don't know, again, you tell me, because I don't know about graphons: one thing that could happen here, when I select these m edges, is that I may have picked the same edge twice, in which case I will never form a copy of C_m. A C_m has every edge picked only once, right, every edge has multiplicity one. And I have a feeling, well, I don't really know whether with the graphon model you could represent this cleanly; possibly you could. But I don't actually know what happens in the limit to these events, you know, the event that you pick something twice. The main issue, though, is that in some sense this mu is not a limit object, because we're actually gunning for a mu that is very finite. Because this probability, I can check, sure enough, is optimized by the uniform distribution on the edges of C_m itself. Right, so I don't know what the limit object there even is. Maybe another way to avoid the graphon is the line graph: you consider a weighted line graph, with the weights summing to one, so if you consider the line graph of the graph, maybe you can apply that. But is it, though? Because a triangle in a line graph, well, doesn't correspond to an actual m-cycle, does it? No, it doesn't, because I may have revisited the same vertices when I pull it back to the original graph. Yeah, I think so; you probably have to, like, condition to forbid this. Yeah, probably; I don't actually know, I haven't thought about them as line graphs. Again, this is a very general problem. As I mentioned, for this guy right here, I can really replace C_{2m} by any subdivision of a graph, provided that its minimum degree is at least, like, two or three, basically. So if I have a one-subdivision of some graph that doesn't have dangly leaves like the path does, then again we're trying to maximize the probability that, picking a bunch of edges, they form a copy of the graph that we actually subdivided.
This is an interesting question, because you might conjecture that, if I want to maximize the probability of getting a copy of H, the right thing to do is to take the uniform distribution on the edges of H itself. That's not actually correct for infinitely many graphs H; I have lots of counterexamples. But for cycles, I think it is true, and again we can prove it for three and four. Technically this problem is very slightly different when m is two, because what do I mean by a two-cycle here? It's really: I want to maximize the probability that when I pick two edges, I actually pick the same edge. That's what I mean by a two-cycle, which is clearly optimized by literally only having one edge in your support. So, yeah, Josh. Yeah, sorry, I know essentially nothing about graphons. The main thing that I would be saying for why this mu is a graphon is that I could think about there being infinitely many vertices. But at least for every one of the problems that fit into this situation, I can prove that the supremum is achieved by something on not too many vertices. And there are not very many vertices in all of these; maybe, like, twice the number of edges is a strict upper bound on the number of vertices, say, I can't remember exactly what I can prove. And for the odd paths, I can't prove the supremum exists, but again, the conjectured answer for this guy, what is the maximum here, the conjecture that we have is that this thing is actually maximized, again, by uniform on the edges of an m-cycle. Again, we can prove this for two and three. But if these are indeed the optimizers, then I don't know what graphons would have to say about it, because there's no real limit object here. But perhaps I just don't know enough to say anything. My mental picture was having the set X grow without bound, and then you've got a...
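For the conjectured optimizer, uniform on the m edges of C_m itself, the cycle probability is easy to compute: the m sampled edges form a copy of C_m exactly when every edge of the cycle is drawn once, so the probability is m!/m^m. A quick Monte Carlo sanity check (my own sketch; the choice of m and the trial count are arbitrary):

```python
import random
from math import factorial

def cycle_prob_uniform_cm(m, trials=200_000, seed=0):
    """Estimate the probability that m edges sampled i.i.d.
    uniformly from the m edges of C_m form a copy of C_m, i.e.
    are all distinct (a repeated edge can never close the cycle)."""
    rng = random.Random(seed)
    hits = sum(
        len({rng.randrange(m) for _ in range(m)}) == m
        for _ in range(trials)
    )
    return hits / trials

m = 4
exact = factorial(m) / m**m        # 24/256 = 0.09375
estimate = cycle_prob_uniform_cm(m)
assert abs(estimate - exact) < 0.01
```

Note that for m = 3 this gives 3!/27 = 2/9, matching the proved triangle optimum.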
Yeah, the point here is that theoretically X could grow without bound, right, because in our reduction step, the epsilon needed to go to zero, so your number of big vertices may tend toward infinity. So theoretically X could grow without bound. The point, though, is that what we're saying is: if X grows without bound, what actually happens is that most of those vertices are just isolated. They're not incident to any edge that has mass; your entire mass is concentrated on some finite subgraph. That should be what's going on in all of these problems, and in the ones where I can prove the supremum is achieved, it is going on. For the path problem, I have no idea, but I suspect that it's also going on. But I'm not going to discourage you from trying to find a different way to think about this; I just don't have any idea how graphons would help. Yeah, the sparseness is making it so that the limit as the size of X goes to infinity is irrelevant. Another thing maybe I should point out is that this mu has nothing to do with planarity, right? It's literally an arbitrary probability mass on the edges of a clique. Even though we started with, you know, linearly many edges, this mu may have all of the edges of the clique carrying positive mass in its support, who knows. The point is that taking the ground set to be larger and larger doesn't help you. And again, this actually makes sense: I mean, if I had a mu that was actually spreading out all of its mass, but I want to maximize the probability that if I pick a bunch of edges they form a particular subgraph, well, if mu spreads out all of its mass, then I'm going to pick these edges to be really far away from one another, and I have no chance of forming this finite, very compressed object. So that kind of gives you an intuition for why you shouldn't spread mass out everywhere, that it really should concentrate on some small part.
I'm curious more about, like, what's actually going on in these functions. The paper is on arXiv, where we do a lot more than what I talked about and study these types of functions more in depth, if you want to take a look. And maybe there's something there, if you see the actual details rather than my hand-waving nonsense when talking about this, maybe you can get a better idea. But I'm great at hand-waving nonsense. Do we have any other questions for Chris? Okay, if not, thanks again, Chris, and thanks everybody for making it out, and have a good weekend, everybody. Thank you.