Okay, thanks everybody for making it out, and sorry about the email going out late this week. I must have shut my laptop too quickly after I hit send, so it never made it out of my drafts; I apologize for that. This week we have Ryan Martin from Iowa State University, who is going to talk to us about splits with forbidden subgraphs. Go ahead and take us away.

All right, thank you, Drew. This is joint work with Maria Axenovich, and if you can see it, this is a coffee cup I got from Karlsruhe. I spent last fall in Budapest at the Rényi Institute on sabbatical, and I spent two months of the spring, before COVID hit, in Karlsruhe working with Maria. Our subject is splits, which I will explain.

A graph is called an (n,k)-graph, also called a k-split of the complete graph, if it has three properties. First, it is n-partite; one thing I want to emphasize is that n is no longer the number of vertices of our graph but the number of parts in the split. Second, there are at most k vertices in each blob (I'll use the word "blob" because it's easier and clearer than "part"). Third, at least one edge lies between each distinct pair of blobs, and for our particular problem we might as well assume that exactly one edge lies between each distinct pair of blobs.
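To make the three defining properties concrete, here is a small checker (my own illustrative code, not from the talk; the function name and the set-of-blobs representation are invented):

```python
from itertools import combinations

def is_nk_split(blobs, edges, k):
    """Check the three defining properties of an (n, k)-graph:
    the blobs partition the vertex set (n-partite), every blob has
    at most k vertices, and every pair of blobs is joined by an edge."""
    # Blobs must be pairwise disjoint and of size at most k.
    seen = set()
    for blob in blobs:
        if len(blob) > k or seen & blob:
            return False
        seen |= blob
    # Every edge must stay inside the declared vertex set.
    if any(u not in seen or v not in seen for u, v in edges):
        return False
    # At least one edge between each distinct pair of blobs.
    for a, b in combinations(blobs, 2):
        if not any((u in a and v in b) or (u in b and v in a) for u, v in edges):
            return False
    return True

# The smallest interesting case: a (3, 2)-graph, three blobs of size
# two with exactly one edge per pair of blobs.
blobs = [{0, 1}, {2, 3}, {4, 5}]
edges = [(0, 2), (1, 4), (3, 5)]
print(is_nk_split(blobs, edges, 2))  # True
```

Dropping any one of the three edges, or letting a blob grow past size k, makes the check fail.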
So, for instance, there is an edge between this blob and this one, and it's this edge; if you look between every pair, there is exactly one.

There is a little bit of history to this sort of thing. Heawood considered the smallest k for which the complete graph has a k-split that is planar. More generally, a graph Γ takes the place of K_n: we take Γ and split it, making sure the blobs correspond to the vertices of Γ, with an edge between two blobs whenever the corresponding vertices are adjacent in Γ; the smallest k for which a planar k-split exists is called the planar split thickness. Splits were also used in bar visibility representations of graphs; Hutchinson, West, and my coauthor Maria, among others, have worked in this area.

What we were asking is a standard Turán-type question. Suppose you are given a graph H and an integer n larger than the number of vertices of H. What is the smallest k such that there is an (n,k)-graph with no copy of H as a subgraph? You have this forbidden graph H, and you want to create a graph with blobs so that there is at least one edge between each pair of blobs. Since we forbid H as a subgraph, I am free to delete any extraneous edges: if I had two edges between a pair of blobs, I could delete one of them, and in fact it would only help me. So f(n, H) is the minimum k such that there exists an (n,k)-graph G with H not a subgraph of G.

With that setup in mind, take a look at the example I've been feeding you for the past couple of slides. It turns out it's bipartite. Instead of drawing it to be bipartite, I'll color the vertices, one red and one blue in each blob, and notice, if you look carefully, that every edge has a blue endpoint and a red endpoint; that is exactly what it means to be bipartite. So this one is bipartite, and this is not a coincidence.
There is an observation, made in an earlier paper introducing this idea: if the chromatic number of H is at least three, then the f we want to look at equals two. All we need is a bipartite graph that is a split of K_n, and that is easy enough to create. So it turns out this question is completely trivial for any graph H that is not bipartite; but, as with the Turán problem itself, it suddenly becomes highly nontrivial when H is bipartite.

Someone asked: what's going on with that result? Why should it be true? Okay, so what you need is a graph on 2n vertices with an edge between each pair of blobs, and it needs to be bipartite. Here is how you construct it. Take n sets, and in each one color one vertex red and one vertex blue. Then, for every pair of blobs, arbitrarily put an edge between them that goes from a red vertex to a blue vertex; it does not matter at all how you do it. When you have done that for every pair of blobs, you have an (n,2)-graph: two vertices in each blob, n blobs, and every edge goes from a red vertex to a blue one. That graph is bipartite, so it contains no copy of H whenever H has chromatic number at least three.

It's a good exercise, because I think it gets you thinking about how this thing might work. But it says nothing about bipartite H. Okay, so let's observe what we have.
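The construction just described takes only a few lines (my own sketch; the vertex labels ('r', i) and ('b', i) are invented for illustration):

```python
from itertools import combinations

def bipartite_split(n):
    """Build an (n, 2)-graph that is bipartite: blob i holds a red
    vertex ('r', i) and a blue vertex ('b', i), and each pair of
    blobs gets one red-to-blue edge (the choice is arbitrary)."""
    blobs = [{('r', i), ('b', i)} for i in range(n)]
    # For each pair i < j, connect the red vertex of blob i to the
    # blue vertex of blob j.
    edges = [(('r', i), ('b', j)) for i, j in combinations(range(n), 2)]
    return blobs, edges

blobs, edges = bipartite_split(5)
# Every edge joins a red vertex to a blue vertex, so the graph is
# bipartite with parts {red vertices} and {blue vertices}.
assert all(u[0] == 'r' and v[0] == 'b' for u, v in edges)
# Exactly one edge for each of the C(5, 2) = 10 blob pairs.
assert len(edges) == 10
```

Since the result is bipartite, it contains no H with chromatic number at least three, which is the whole point.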
There is actually a very trivial lower bound here. An (n,k)-graph is a graph on at most nk vertices; we don't care if one of the blobs is a little smaller, because our parameter is always the size of the largest blob. Since G has no H and at most nk vertices, it obeys this rule: G has n blobs, so there are n choose 2 edges, one between each pair of blobs, and that is at most the extremal number ex(nk, H). Keep in mind that the argument of the extremal number is the total number of vertices.

Of course, if you replace the extremal number with an expression you happen to know, for instance ex(L, H) ≤ C L^b for some exponent b (it does not matter whether b is the true exponent; the inequality just has to hold), then solving for k gives k ≥ c' n^{2/b − 1}. So we have a lower bound on the minimum: if k were any smaller, this condition would fail.

There is another observation: H is always a subgraph of some complete bipartite graph, and so we can use Kővári–Sós–Turán to say that there is in fact some b < 2 for which this holds; therefore we always have such a bound for f(n, H). We are not really sure what the best b ought to be, but in any case some b < 2 works. Among other things, this means f(n, H) goes to infinity as n goes to infinity, which of course was not true when H was not bipartite.
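Written out, the counting argument behind this lower bound is (my own transcription of the inequality chain on the slide):

```latex
\binom{n}{2} \;\le\; \mathrm{ex}(nk,\,H) \;\le\; C\,(nk)^{b}
\quad\Longrightarrow\quad
k^{b} \;\ge\; \frac{n(n-1)}{2C\,n^{b}}
\quad\Longrightarrow\quad
k \;\ge\; c'\,n^{2/b-1},
```

and the exponent 2/b − 1 is positive exactly because Kővári–Sós–Turán supplies some b < 2.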
So that's an interesting little fact. In general, I would say this is our main theorem: if H is a bipartite graph that is not a forest, and for every sufficiently large L the extremal function is trapped between c L^a and C L^b for some positive constants c, C and exponents a, b, and if b − a is not too large, then we have these very nice bounds for f. It is an artifact of the proof that we need a and b to be close to each other. I really don't know what people conjecture about whether the extremal function actually behaves like a power of L or oscillates in some way; I'll give you some information about that later. But notice that if a = b, we get a really nice tight result that differs only by a multiplicative log term.

So that's nice. Let me go through some of what we know about Turán exponents; I took this from the Füredi–Simonovits survey, from Janzer's papers, and from others, and I just wanted to show some examples for which our theorem holds and some for which it does not. For example, take C_{2k}. When k is outside {2, 3, 5} it does not work: the known upper and lower bounds are too far from each other for our formula. But C_4, C_6, and C_10 do work for our theorem. So do K_{2,t} and K_{3,t}. I have K_{5,t} listed; I'm not sure why I don't have K_{4,t}, but I think K_{4,t} follows from K_{5,t}. K_{6,t} works. The bounds known so far for K_{s,t} when t is not too much bigger than s do not work, but they work again when t is much larger compared to s, and of course better values are known when t is huge compared to s; our theorem still holds there. For the cube it does not hold, because the bounds we have for the cube are just too far apart. And this K_{s,t} entry is a relatively new result.
It is from a Janzer paper, subdividing K_{s,t} in a certain way, and our theorem happens to work in that case too. So sometimes our theorem works and sometimes it doesn't, and it does work whenever you actually know the Turán exponent. The obvious corollary: if H is a bipartite graph that is not a forest, with ex(L, H) = Θ(L^r), then there are constants for which f(n, H) is trapped between n^{2/r − 1} and n^{2/r − 1} times a log term. Again, our theorem works on a larger range than where the Turán exponents are known exactly, but this is a talk and I want to keep it simple, so we will just assume the Turán exponent exists, so that we are in the situation of the corollary, and I will try to explain how some of this goes.

There is a very important lemma that we use, an old lemma due to Erdős and Simonovits. It says that if the extremal function (at least the way we are using it) has a known exponent, then we can find a graph that is not quite extremal, with half the edges of the extremal graph, but whose maximum and average degrees differ by at most a multiplicative constant. There is no a priori reason that an extremal graph for a Turán problem could not have one vertex of very large degree and then a bunch of moderate-degree vertices; but with a little manipulation of an extremal graph you don't know much about, you can guarantee that the maximum degree is not too big.

Once we have that, here is the plan. The upper bound is really the key; the lower bound follows directly from the first observation I made, that n choose 2 is at most the extremal number. So I am going to choose a value of k such that the following inequality holds:
ex(nk, H) ≥ 12 n² log n; the constant 12 does not matter much. We let k be large enough for that to hold, and then we find a graph with the following properties: it has nk vertices, which we label N; it has ex(N, H)/2 edges, which we call M; and its maximum degree is at most a constant q times M/N, where 2M/N is the average degree. We can guarantee all of that from the Erdős–Simonovits result from 1970.

So we take such a graph. We have not created the blobs yet, so let's create them, and we do it by randomly partitioning the vertex set, in the most naive way possible: color each vertex with one of n colors, independently, each color with probability 1/n. That seems like the obvious thing to do, and we need to show two things in order for this construction to give an (n,k)-graph as we would like. The first is that each blob has to have a reasonable size; the size we are looking for is of course k, but we allow ourselves a little wiggle room, and with that wiggle room it is very easy to show that if you partition a set randomly in this way, the largest blob has size k + o(k). The other, more difficult thing is to guarantee the condition that every pair of blobs has at least one edge between them; and in this case, if we get multiple edges, that's great.
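The first point, the concentration of the blob sizes, is easy to see in a quick simulation (my own code; the parameters and the seed are arbitrary):

```python
import random

random.seed(7)  # fixed seed so the run is reproducible

n, k = 40, 60   # n blobs, target blob size k
N = n * k       # total number of vertices

# Color each vertex independently and uniformly with one of n colors.
color = [random.randrange(n) for _ in range(N)]
blob_sizes = [color.count(c) for c in range(n)]

# Each blob size is Binomial(N, 1/n) with mean k and standard
# deviation about sqrt(k), so the largest blob is k + o(k).
print(max(blob_sizes))
assert sum(blob_sizes) == N
assert max(blob_sizes) < k + 10 * int(k ** 0.5)
```

The fluctuation is on the order of sqrt(k), which is exactly the "little wiggle room" mentioned above.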
We could delete the extra edges if we felt like it, but otherwise we don't bother much. The problem with this is that there is a slight dependency. It depends on what you take your random events to be, but I would say the event is that two given blobs have an edge between them, and there is dependency among those events, because the blobs are all determined by the same random coloring of the vertices.

So here is how we get around independence. Let i and j be colors, and let S_{ij} be the number of edges with color i on one endpoint and color j on the other, for i ≠ j (the case i = j is fine, but we don't care about it). What we are concerned about is the event that S_{ij} = 0; if any of those events happens, it's a bad situation. So we want to show that the probability that any of them happens is strictly less than one. Formally, we want to show it is less than one half, because we still need the condition that all the blobs have size at most k, but that happens with very, very high probability.

By symmetry, it suffices to bound just one expression, the probability that the blobs of colors 1 and 2 have no edge between them. If we can bound that, then we take a union bound over all pairs, and if the single bound is small enough, the union bound will solve the problem for us. So we have really reduced the problem to taking two blobs and asking whether there is an edge between them.
The way we do this (maybe it's natural, maybe it's not) is to work edge by edge, using the graph we already have. The graph is deterministic; we don't have much control over it. What is random is the coloring of the vertices. So for every edge e_i, let I_i denote the indicator of the event that the colors of the endpoints of e_i are 1 and 2.

What is the probability of the complement of I_i? I'm hoping to get a little feedback here. We have an edge, and we need the probability that its endpoints do not receive colors 1 and 2. The probability that a given vertex has color 1 is 1/n, and the probability that it has color 2 is 1/n; the edge can be colored (1, 2) or (2, 1), so the probability that the edge gets colors 1 and 2 is 2/n², and the complement has probability 1 − 2/n².

The event S_{12} = 0 is exactly the event that all of these indicator events fail: if no edge gets colors 1 and 2, we are in the bad situation. What we would like to say is that the probability of the intersection of these events is approximately the product of their probabilities; if those were equal, the events would be independent. But as I said, there is a slight dependence here.
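If the indicators were genuinely independent, the heuristic calculation would be (with M the number of edges, as before):

```latex
\Pr(S_{12}=0) \;\approx\; \prod_{i=1}^{M} \Pr\!\left(\overline{I_i}\right)
\;=\; \left(1-\frac{2}{n^{2}}\right)^{M}
\;\le\; e^{-2M/n^{2}} \;=\; e^{-\mu},
```

so the target is an exponentially small failure probability with exponent μ = 2M/n²; the rest of the argument is about showing that the slight dependence does not spoil this.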
We cannot guarantee independence of these events, and the problem with what I would consider the traditional approach is that there is really too much dependency to apply the Lovász local lemma. When you apply the local lemma, you need a certain amount of non-dependency: events that are far away from each other, in some sense, must be genuinely independent. Here there is a slight dependency everywhere; but the thing is, the dependency is not very big. So the local lemma does not seem to work very well, but there is an alternative: Suen's inequality. Perhaps I should say Suen's inequalities, since Suen had one version and it was subsequently improved. The setup for Suen's inequality is remarkably similar to the setup for the local lemma: it uses a superdependency digraph, and I am going to simplify things for our setting.

Here is the goal. You want to show that the events behave as if they were independent: you want the probability of the intersection to be very close to the product of the probabilities of the complements of the I_i, which is the value you want it to be. Suen's inequality says that the difference between that probability and the product is at most the product times this expression, the one I am coloring in yellow. If the sum of those y's is small, then you have e^(small) − 1, which is small, and so what we hope to get is that the probability we are looking for is the product, plus the product times something small, and that will allow us to solve the problem. Suen's inequality has this expression in it, and since the events I_i and I_j are sort of well understood (we are just coloring things uniformly at random),
this is not a bad setup to use. But we did not use it; let me go into what this means for what we are doing. The product of those probabilities is at most e to the minus the sum of the probabilities, which is e^(−μ). So if we got things to work, Suen's inequality would say that the probability we want is roughly e^(−μ). Unfortunately, that form is too restrictive for us, so what we use is Janson's version of Suen's inequality. If you find yourself facing a situation where you think the Lovász local lemma is not working, because there is too much dependency between events but the actual dependency between events is small, maybe you want Suen's inequality; I would recommend Janson's 1998 paper, because he proves not only Suen's inequality but a number of versions of it. The one we use is basically the one with the fewest variables: there is an extra parameter that appears in all of his other theorems, and in this one it is eliminated. We pay a little price, we don't get quite what we want, but it is not a big price for the problem we are interested in.

Okay, so here is Janson's setup. We have a bunch of indicator random variables and a dependency graph: if A and B are disjoint index sets with no edge between A and B, then the families {I_i : i ∈ A} and {I_i : i ∈ B} are independent. S_{12}, which is what we are looking at (we want to determine whether it is zero or not), is the sum of the indicators; that is just the number of edges colored with colors 1 and 2. p_i is the expectation of the indicator, as usual, and μ is the expectation of S_{12}, which is the sum of the p_i. Then there is a quantity D; I don't know, you might call it a discriminant, but really it is a covariance term: what it is doing is measuring the covariance of these indicators.
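Collected in one place, the quantities of Janson's setup specialize in our setting to (my own summary of the definitions just given; M is the number of edges):

```latex
S_{12} = \sum_{i=1}^{M} I_i, \qquad
p_i = \mathbb{E}[I_i] = \frac{2}{n^{2}}, \qquad
\mu = \sum_{i=1}^{M} p_i = \frac{2M}{n^{2}}, \qquad
D = \frac{1}{2} \sum_{\substack{i \neq j \\ e_i \cap e_j \neq \emptyset}} \mathbb{E}[I_i I_j],
```

where the sum in D runs over the dependent pairs, which here means pairs of edges sharing a vertex.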
You take your dependency digraph, look at the dependent pairs, compute the expectation E[I_i I_j] for each, and sum over all of them (divided by two, which does not matter). The conclusion it gives you is that the probability of what we are looking for, the probability that the sum of these indicators is zero, is small; in particular, it is at most exp of minus the minimum of μ²/(48D) and μ/4. What we would have liked is exp(−μ), and as you can see, we suffer a little bit: we don't even get μ, we get μ/4. But that will not matter for us, because we end up with a log term anyway, so we don't need the full μ.

So let's go to our setting. Here p_i is the probability that a given edge gets colored with colors 1 and 2, which is 2/n²; adding those up, μ is 2/n² times the number of edges. Now, when are two events dependent? We are determining whether one edge has colors 1 and 2 and another edge has colors 1 and 2. If those edges are vertex-disjoint, they are not dependent on each other, because they are colored by different vertices; but if they share a vertex, there is some dependency. So in the dependency digraph (which in this case is not really a digraph; it is just an ordinary graph) two events are adjacent exactly when the corresponding edges share a vertex. That is why D is a sum of (degree choose 2) terms over the vertices; and the event that two edges sharing a vertex are both colored 1 and 2 means that the central vertex gets color 1 or 2 and the other two endpoints get the other color.
That probability is 2/n³, and then D is easy to bound from above, because Δ is just the maximum degree of the graph; remember, we got control of that from the Erdős–Simonovits theorem. (In fact, we could probably prove this without that theorem; giving away half the edges, having ex/2 instead of ex, does not get us a substantively better result anyway, so it did not matter to us.) If we literally plug into Janson's formula, we get this expression: μ/4 is M/(2n²), and the complicated μ² term simplifies to N/(12 q² n), because that is just the way it simplifies. If we plug in our value of M, the halved extremal number, and compare N/(12 q² n) with M/(2n²), the term M/(2n²) is the one that achieves the minimum, and we get this expression.

Now, it turns out that if we chose k large enough, then the exponent is at most −3 log n, and then the union bound holds. The probability that there exists a bad pair, that is, a pair of blobs with no edge between them, is at most (n choose 2) times the probability that one specific pair of blobs has no edge between it, which is at most e^(2 log n − 3 log n); that goes to zero, which ends the proof.

So the story here is that Janson's version of Suen's inequality is really important for what we are doing: again, we have a dependency digraph, and there is some dependency between the events, but fortunately it is small enough for us to handle. All right, so we get this log term
in the upper bound. So, if you remember, f(n, H) is at most big-O of n^{2/r − 1} times a log factor; let me just point you to this expression. The point is that once k is as large as the expression in orange, the random coloring succeeds, so k only has to be as big as that expression. The log term, as with most probabilistic methods, is inevitable: you are almost always going to pick up a log term when you apply a probabilistic method like this. So we were pleased; we did not think we could do any better with a probabilistic method, and I think that is borne out in some of our other results, where we can get rid of the log.

In particular, we can get rid of the log for C_4. In general, if the extremal number does what we want it to do, we get a log-term gap between the upper and lower bounds; but for C_4 we get no log term, just a coefficient of two between the upper and lower bounds. The reason is that we do not use a probabilistic method. The lower bound comes from the typical argument we keep using: we must have n choose 2 edges, and that is bounded by the extremal number, which we know for C_4. The upper bound comes from a construction of a (q^{3/2}, 2q^{1/2})-graph that is C_4-free, where q is the square of any prime power, in other words a prime to an even power. So technically we have an infinite sequence of examples that produces the upper bound, but between examples you can delete some vertices and interpolate. Here is how we do our construction.
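Before the details, a quick sanity check on these parameters (my own arithmetic): the construction matches the lower-bound exponent exactly, up to the factor of two.

```latex
n = q^{3/2}, \qquad
k = 2q^{1/2} = 2\left(q^{3/2}\right)^{1/3} = 2\,n^{1/3},
\qquad\text{while}\qquad
\mathrm{ex}(L, C_4) = \Theta\!\left(L^{3/2}\right)
\;\Longrightarrow\;
k \ge c\,n^{2/(3/2) - 1} = c\,n^{1/3}.
```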
For any prime power q, the classical affine plane of order q has q² points and q² + q lines. One way to get it is to take a projective plane of order q and delete a line; what you obtain is the classical affine plane. What you are seeing here is a projective plane of order 3. Recall that the Fano plane is the projective plane of order 2; a projective plane of order 3 you can think of as a hypergraph in which the edges have size 4, and you can check that this really is a projective plane of order 3 if you want to. If I delete that one line, this is what I get, and it is really nice: now I have a 3-by-3 grid of points, and notice that the lines come in parallel bundles. I have the horizontal lines, I have the vertical lines, and I have the lines of slope 1. The way this is drawn is a little confusing; sorry, I misdrew it. This one has slope 1 going all the way across, this diagonal has slope 1, and this triple has slope 1; and then we have slope −1.
So they all come together in bundles, all with the same slope, and here I have drawn them colorfully. The graph G_q that we are going to create is bipartite: its two sides are the points and the lines. This should be very familiar to anybody who has seen the point-line incidence graph of a projective plane; this is just the same thing for the affine plane. We have our points and we have our lines, and a point is adjacent to a line in the bipartite graph if and only if the point lies on the line in the affine plane. I should point out that if you take the projective-plane incidence graph and delete a line and the corresponding points, this is the graph you obtain; but it is sometimes easier to express it directly in terms of the affine plane, and I will take advantage of the fact that we are expressing it strictly in terms of the affine plane. The claim is that this graph has no C_4, and that claim is easy if you believe what I told you, that it is the projective-plane construction with a line and all its incident points deleted.
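As a quick computational check of the claim (my own code; I use a small prime p rather than the square of a prime power that the blob construction will eventually need): a C_4 in the incidence graph would be two points lying on two common lines, so it suffices that any two distinct lines meet in at most one point.

```python
from itertools import combinations

p = 5  # a small prime; the field GF(p) is just arithmetic mod p

# Points of the affine plane AG(2, p): pairs (x, y).
points = [(x, y) for x in range(p) for y in range(p)]

# Lines: y = m*x + b for each slope m and intercept b, plus the
# p vertical lines x = c.  That gives p^2 + p lines in total.
lines = [frozenset((x, (m * x + b) % p) for x in range(p))
         for m in range(p) for b in range(p)]
lines += [frozenset((c, y) for y in range(p)) for c in range(p)]

assert len(points) == p * p and len(lines) == p * p + p

# C_4-freeness of the incidence graph: any two distinct lines share
# at most one point, so no two points lie on two common lines.
assert all(len(l1 & l2) <= 1 for l1, l2 in combinations(lines, 2))
print("incidence graph of AG(2, %d) is C4-free" % p)
```

Two lines of different slope meet in exactly one point and parallel lines meet in none, which is the affine-plane axiom doing the work here.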
I'm establishing C_4-freeness, and now the blobs. Let H be a subgroup of order √q of the additive group of the field, and let A be a set of distinct representatives of the cosets. Remember that the points sit in a nice grid, indexed by pairs (x, y). For each x and each coset representative a in A, we form the class P_{x,a} of all points (x, y) whose y lies in the coset a + H; we bundle those together as a set of points. The lines are bundled similarly, according to slope and coset. It turns out that each such class of points and each class of lines has size √q, and what we do is pair each class of points with a class of lines to make a blob of size 2√q. So if you are picturing the bipartite graph, you have your points, you have your lines, and some bipartite graph in between; you further subdivide the points, you further subdivide the lines, and then you arbitrarily pair up a class of points with a class of lines. Does the pairing matter? It turns out it solves a minor problem here, but you can do it essentially arbitrarily without much trouble. So now we have a C_4-free graph, and we have this partition of the points and the lines of the affine plane.
Then I have two claims. One is that the graph with these blobs is what we want, a (q^{3/2}, 2√q)-graph; that is sort of clear, since the blobs have size exactly 2√q and therefore there are q^{3/2} of them. The other is that G_q has an edge between each pair of blobs. I won't get into why that is true, but the picture basically shows what we have done: the blobs we take for the points are built from the vertical lines (that is how we bundle the points together), the line classes L_m are those with a common slope, and then we do the further subdivision according to the cosets of the subgroup. It looks very complicated, I think because of the notation, but the idea is pretty simple. In fact, Craig Timmons, after we posted this on the arXiv, made the observation that you can get a similar upper-bound result for C_6 and C_10. You pay a little bit in the constant, but that does not matter: you get rid of the log term for C_6 and C_10 too, and that is just because we know constructions of C_6-free and C_10-free graphs that are very well structured and very algebraic, so we are able to understand them very well.

A corollary of this, which is interesting: for our problem, if you get something to be true for H, you get it to be true for every supergraph of H. And so, in particular, our result on C_4
And so in particular our result on c4 Corresponds to a result on k2t So uh the upper bound still holds the lower bound we keep losing ground But we only lose constant so So we're able to um, we have a long term difference if we know the toron exponent if it's um If it's a c4 or c6 or c10, we know enough about the extremal graph that we can get a better result and get rid of the log And then we have um question about trees. So if you remember I in way back in the um When I said what the main theorem was We excluded trees explicitly And we have better results on trees on trees. We have no log term So If a tree is on t edges, then we have the following bounds um The results are pretty straightforward. The lower bound is the same old lower bound. We've always done The upper bound comes from multicolor ramsey and that me and that comes from here And it turns out that the multicolor ramsey number for trees is nice and we can analyze that And if you notice our general bounds on trees differ by a multiplicative factor of four Moreover, um, they depend highly on The fact that we know the extremal number is at most l t minus one But the air to shosh conjecture suggests that in fact you can do better And that the extremal number is less than or equal to l times t minus one over two And if that were true, then we would improve the lower bound by a factor of two and we could get rid of that Um, I think the air to shosh conjecture there there's currently a claim that a proof exists And that claim is at least five years old Um, you're probably even older than that. 
So it's apparently a very complicated proof. If you believe the Erdős–Sós conjecture is true, then we can get a slightly better bound here. But in the special case of a star, we actually have the solution in almost all cases. We have that it's actually (n-1)/(t-1), and I made a slight mistake here, let me fix it: it's the ceiling of (n-1)/(t-1), unless t is even and (n-1)/(t-1) is an even integer. In that case we were not able to fix it; we have a difference of one. I should say that if t equals 2, notice that that's even. t = 2 means that you forbid a K_{1,2}, and forbidding a K_{1,2} as a subgraph just means that the maximum degree is one. But you can use a perfect matching, and the perfect matching gives exactly n blobs of size n-1. Take a perfect matching on n(n-1) vertices, so n(n-1)/2 edges, and then color the endpoints of the edges: for the edge assigned to the pair {i, j}, color one endpoint color i and the other color j, and do that for all pairs i and j. That solves the problem, and it gives blobs of size n-1. So, what's open?
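The t = 2 construction just described can be made concrete. This is a short sketch under my own naming, not code from the talk: one brand-new edge for each pair {i, j} of blob labels, with one endpoint placed in blob i and the other in blob j:

```python
from itertools import combinations

def matching_split(n):
    """For each pair {i, j} of the n blob labels, create one fresh
    edge and put one endpoint in blob i and the other in blob j.
    The edges form a perfect matching on n(n-1) vertices, so the
    maximum degree is 1 (no K_{1,2}), and each blob gets exactly
    n - 1 vertices while every pair of blobs is joined by one edge."""
    blobs = {i: [] for i in range(n)}
    edges = []
    for i, j in combinations(range(n), 2):
        u, v = (i, j, 0), (i, j, 1)  # two brand-new vertices for this pair
        blobs[i].append(u)
        blobs[j].append(v)
        edges.append((u, v))
    return edges, blobs

edges, blobs = matching_split(5)
print(len(edges))                        # 10 = 5*4/2 matching edges
print({len(b) for b in blobs.values()})  # {4}: every blob has n-1 vertices
```

Since every vertex lies in exactly one edge, no vertex has degree two, so no K_{1,2} can appear, and each pair of blobs gets exactly one edge by construction.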
Well, there is still this log term, and I would say that an open problem at least worth working on is getting rid of the log term in the cases where the Turán exponent is known. Again, you can't use probabilistic methods, and the problem is that we don't even know very much about the extremal graphs, except in the cases of C4, C6, and C10. If we have a t-edge tree, again we have these bounds: the lower bound depends on the extremal number, so it can be improved with Erdős–Sós, and the upper bound depends on Ramsey numbers of trees. And then, of course, there's the annoying case of the star on t edges: we haven't quite resolved it, but we think that can be done. And then of course you can consider other splits. Splits have been used for planarity and bar visibility; maybe you can apply them to a problem other than Turán-type problems. And then of course you can ask about splits in other combinatorial structures. In hypergraphs they make some sense, but again, we know even less about hypergraphs than we know about graphs. And that's all I have. Thank you.

Thanks, Ryan. Let's have everybody thank our speaker.

Thank you. I have other pictures of Karlsruhe. That's the math building, and that's the Schloss, the castle in Karlsruhe; if you look at a map of Karlsruhe, it defines the whole city. And that's looking away from the Schloss at night. And that is something I saw on the streets of the town. It didn't strike me as a very rich town, but a pink Lamborghini is pretty impressive. And yeah, I saw that in the supermarket. What else do I have? Oh yeah, dog parking. And Luftballons. That's just me thinking. And that's in the math department building. You're welcome.

Okay, why don't we open it up for any questions for our speaker?
Of course, there are these various strengthenings of Janson's inequality under various conditions, if you know something more about the events, like that they're negatively dependent or whatever.

Yeah, so, racking my brain: all of them involve another parameter with a lowercase delta, not the minimum degree but something else, and we just said, why calculate this other thing when we were always going to lose a log term anyway? So any such improvement just got absorbed. But yeah, I think it's positive dependency, though I'm not sure; maybe it's negative dependency. There are a variety of these that are due to Janson, and it's very nice. I mean, many times you run across a problem like this and you want to solve it using, say, the Local Lemma, and the Local Lemma doesn't work because there's too much dependency; but in this case we could get control over the dependency where the events were dependent, so we took advantage of that. And I've actually been looking to apply Suen's inequality for a long time, thinking, this is beautiful, why have I never used it before? Finally, this was a nice opportunity to do it. And I think in general you just can't get rid of that log term, because the extremal graphs are just such a black box. Unless you could do something like Erdős–Simonovits and say that not only is the max degree under control, but other things are under control too, in the sense that the graph is sort of regular in some sense; then maybe you can get rid of that log term. But even then I think you're just reducing the exponent.

Is there a believed truth for the f(n, T) function when T is a tree?

Well, I would say there hasn't been extensive study of this, so let me say that I believe neither the upper nor the lower bound is correct, and that asymptotically it should be n/t, or (n-1)/(t-1), plus little-o of n, for any fixed t.
So I think the obvious thing to say is, well, the Erdős–Sós bound gets rid of the two on the left-hand side, and probably you can get rid of the two on the right-hand side, but that involves knowing some Ramsey numbers, and unfortunately I don't think those are well understood for trees.

Would it be helpful to do any kind of alteration there? With the probabilistic method, if we don't guarantee that every two blobs have an edge, can we come back and fix it later?

Oh, so yeah, I think the thinking you have here is that you guarantee that many pairs of blobs have an edge between them, but not all of them, sacrificing the log, and then what you do is try to put the missing edges back in, and make sure to do that without creating a copy of H. Yes, that would be the problem, because I wouldn't know how to put them back in; I would try to do it the other way around. But yeah, the random partitioning is going to make some things very opaque for you, because the graph we start with is something close to an extremal graph, and then we manipulate it randomly; we don't know what we started with, so we can't really make a lot of good statements about what we end up with.

We don't even have to interact with the existing graph, right? We could just add whole new vertices.

Yeah, so your sacrifice would be as follows.
Your blobs are now of size a constant times whatever they were, and then you add that amount. So that would mean you would have to make sure that you're adding a small number of vertices, maybe adding a perfect matching or something, so that you guarantee that you have no interaction. It has to depend on what the H is, to avoid it.

Well, not necessarily, because you can add new vertices.

So I think one approach to what you could be saying is: suppose you wanted to create blobs of size k/2. You're going to run into the problem that many pairs of blobs have no edge between them, and what you do is augment every blob and then put in edges as you need them, but you make them brand-new edges.

Yeah.

Yeah, the only thing I would fear there is that you are reducing it to the same problem. Because in order to insert them, you have to create these new blobs, these new sub-blobs, and then they have to be H-free, and you have to put the edges in where they're missing so that everything stays H-free. And that may be difficult, because those edges are going to have to interact in some way, and so you're going to have to know how to put them in in such a careful way that you avoid creating a copy of H. It's possible that it's a good approach; it's not something I thought of, and you might get a good result there. But what I would be worried about is that you're creating the same problem and trying to resolve it too. So I don't know. Okay, that's a great approach though, I like that.

Thank you. Do you have any other questions for our speaker? Okay, in that case, thanks a lot, Ryan, and thanks everybody for coming.