Okay, thanks everybody for making it out. Thank you to He Guo, our speaker today, who will be talking to us out of Georgia Tech about packing nearly optimal Ramsey R(3,t) graphs. Go ahead and take us away.

And that's the next screen. Yep, looks good. All right, so let's begin with the introduction. Today I'm going to tell you about constructing triangle-free graphs with pseudo-random properties in dense graphs, where previous results only had such constructions in the complete graph. It lies in the intersection of random graph processes and Ramsey theory. But at the beginning, let me just recall the definition of the Ramsey number in our context. The Ramsey number R(s,t) is the minimum number n such that every red/blue edge coloring of the complete graph K_n contains either a red K_s or a blue K_t. Understanding the behavior of Ramsey numbers is a major problem in combinatorics, and it is well known to be very difficult even to determine their asymptotics. One of the celebrated results in Ramsey theory is the order of magnitude of the Ramsey number R(3,t), which is t^2 / log t, where the upper bound is due to Ajtai, Komlós and Szemerédi in 1980, and the lower bound is due to Kim in 1995. Nowadays we have a much better understanding of the Ramsey number R(3,t) through the work of Bohman and Keevash, as well as Fiz Pontiveros, Griffiths and Morris, but today we only focus on the order of magnitude.

And indeed, the proof techniques behind this Ramsey number have been very influential. The upper bound by Ajtai, Komlós and Szemerédi in 1980 is considered to be the first application of the semi-random approach. For the lower bound, Erdős, already in 1961, used a clever alteration method to prove that the Ramsey number R(3,t) is roughly at least t^2 / log^2 t. As you can see, there is an extra log factor in the denominator. Later on, this result was reproved by Spencer in 1977 using the Lovász Local Lemma, and in 1994 Krivelevich used an even simpler alteration approach to reprove this result. Finally, in 1995, Kim managed to get the correct lower bound by removing the log factor; to do so he combined a semi-random variation of the triangle-free process with concentration inequalities and the differential equation method. For this major achievement, Kim received the Fulkerson Prize in 1997. And the topic of this talk is an extension of Kim's result, which implies the asymptotics of some other Ramsey parameter, again by removing some log factor. Is there any question on this point?

All right, then we will move on. Before introducing our result, let me briefly review some of the previous results. To find a lower bound on R(3,t), people argued as follows. Erdős in 1961, Spencer in 1977, and Krivelevich in 1994 all showed that there exists a graph G inside the complete graph K_n such that G is triangle-free and its independence number is at most some constant C times sqrt(n) log n. All of them constructed such a graph G in the binomial random graph G(n,p) model, where the binomial random graph G(n,p) is the graph obtained by including each edge of the complete graph K_n independently with probability p. Later on, their result was improved by Kim in 1995 and reproved by Bohman in 2008. Both of them showed that there exists a graph G in the complete graph K_n such that G is still triangle-free but with a smaller independence number, at most C sqrt(n log n). As you can see, there is a logarithmic improvement here.
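To keep the quantitative statements straight, let me record them in LaTeX (my own summary of the bounds just mentioned, with constants suppressed):

\[
  R(3,t) \;=\; \Theta\!\Bigl(\tfrac{t^2}{\log t}\Bigr)
  \quad \text{(upper bound: Ajtai, Koml\'os and Szemer\'edi 1980; lower bound: Kim 1995),}
\]
\[
  R(3,t) \;\ge\; c\,\frac{t^2}{\log^2 t}
  \quad \text{(Erd\H{o}s 1961; Spencer 1977; Krivelevich 1994).}
\]

In the complementary language used below, a triangle-free graph G on n vertices with independence number at most C sqrt(n log n) shows R(3,t) > n for t about C sqrt(n log n), which rearranges to the optimal lower bound t^2 / log t; an independence number of C sqrt(n) log n only gives the weaker t^2 / log^2 t bound.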
And Kim and Bohman's result is nearly optimal: the independence number is tight up to the constant C by the result of Ajtai, Komlós and Szemerédi. This result also leads to the right order of magnitude of the Ramsey number R(3,t) we discussed.

Before introducing the proof ideas of Kim's and Bohman's results, let me mention why such a result is difficult. For example, if you want to prove such a result by the standard approach, the alteration method in the binomial random graph G(n,p), then you will try to remove one edge from each triangle to destroy all the triangles and get a triangle-free graph. But here are some facts. We know that with high probability the number of edges in the binomial random graph G(n,p) is roughly (n choose 2) times p, and the number of triangles is roughly (n choose 3) times p cubed. Because we are removing one edge from each triangle, and we do not want the resulting graph to end up with a large independent set, it is natural to require that the number of triangles is much less than the number of edges, so that the resulting graph keeps most of its edges. But that implies p is roughly at most one over sqrt(n). And here is another fact: we know that with high probability the independence number of the binomial random graph G(n,p) is roughly 2 log n / p. If you plug this p in, you will find that the independence number is roughly 2 sqrt(n) log n, which is similar to the previous results and much larger than our target bound. So that's why proofs based on the binomial random graph G(n,p) do not lead to the correct independence number, and why Kim and Bohman needed a more refined way to prove their results.

Both of them use the idea of the triangle-free process. The idea of the triangle-free process is simple: we start with an empty graph, at each step we add one random edge that does not create a triangle, and then we repeat. Here I will illustrate the process for you (and I give a minimal code sketch of it below). At the beginning, we have not chosen any edges, all the edges are available to choose, and we call them open, meaning each of them can be added. At each step we add one random edge, for example this one. Then at the next step we add another one, and at the third step another one. But at this moment you can see that these two edges become closed, because adding either of them would create a triangle. So we call them closed edges, and they will never be added. Continuing, this edge and this one become closed. That is the triangle-free process, and it is what Bohman analyzed in his 2008 work. Much earlier, Kim used a semi-random variation of the triangle-free process, where semi-random means that at each step, instead of adding just one random edge, we add many random edges. That is the difference between Kim's and Bohman's proofs.

Well, instead of finding just one nearly optimal R(3,t) graph as Kim and Bohman did, we find an almost-packing of nearly optimal R(3,t) graphs. It means that given any epsilon greater than zero, you can find a collection of edge-disjoint graphs G_i in the complete graph K_n such that, first, each graph G_i is a nearly optimal R(3,t) graph, meaning G_i is triangle-free with small independence number, and second, the union of the graphs G_i in this collection contains at least a (1 - epsilon) fraction of the edges of the complete graph K_n. That is our packing result.
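As promised, here is a minimal Python sketch of the plain, one-edge-at-a-time triangle-free process just illustrated (purely illustrative: the function name and the brute-force bookkeeping are mine, and this is the random greedy process analyzed by Bohman, not yet Kim's semi-random variation):

import random
from itertools import combinations

def triangle_free_process(n, seed=None):
    """Random greedy triangle-free process on vertices 0..n-1:
    repeatedly add a uniformly random open edge until none remain."""
    rng = random.Random(seed)
    adj = {v: set() for v in range(n)}           # adjacency of the chosen graph
    open_edges = set(combinations(range(n), 2))  # initially every pair is open
    chosen = []
    while open_edges:
        u, v = rng.choice(sorted(open_edges))    # one uniformly random open edge
        adj[u].add(v); adj[v].add(u)
        chosen.append((u, v))
        # An edge stays open iff it is unchosen and its endpoints have no
        # common neighbor, i.e. adding it would not create a triangle.
        open_edges = {(x, y) for (x, y) in open_edges
                      if (x, y) != (u, v) and not (adj[x] & adj[y])}
    return chosen

Kim's semi-random variation replaces the single random choice per round by a whole binomial sample of open edges, followed by a correction step; that variant is the one analyzed in the rest of the talk.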
The idea to prove our packing result is simple. We start with H_0 equal to the complete graph K_n, and then we find the graph G_i of the collection inside H_i by the semi-random variation of the triangle-free process. Then we remove the edges of G_i from H_i to get H_{i+1}, and we repeat. In this way we obtain our packing result by a simple polynomial-time randomized algorithm, not just by showing existence. Is there any question about this packing result? Okay, then we will continue.

Now, what is the motivation; why should we care about such a packing result? Well, first of all, packing objects in random discrete structures has been a recent trend. Usually one packs fixed objects, such as triangles, Hamilton cycles, or matchings, while we pack objects with similar properties, namely triangle-free graphs with small independence number. Therefore our result is a natural packing extension of Kim's result. Secondly, if you are not into the packing problem but into random graph processes, then you know that even in one run of the triangle-free process, or of its semi-random variation, controlling the errors is very difficult. Here we need to run the process for many iterations, so it is very important to control the errors; otherwise the error terms pile up. For this purpose we introduce a self-stabilization mechanism to control the errors, which we will discuss later. And thirdly, if you are not interested in random graph processes but in Ramsey theory, then, similar to how Kim's result removed the extra log factor in the lower bound of the Ramsey number R(3,t), our result also establishes a conjecture in Ramsey theory by Fox and others, again by removing a log factor, which I will discuss on the next slide. Is there any question at this point?

Okay, let me move on to discuss this Ramsey conjecture. Here we consider the arrow notation of Ramsey theory, and we consider copies of some fixed graph H. We call a graph G r-Ramsey for H, denoted by the arrow notation G -> (H)_r, if every r-coloring of the edges of G contains a monochromatic copy of H. And we use the notation M_r(H) for the collection of all r-Ramsey-minimal graphs G for H, where r-Ramsey-minimal for H means that the graph G itself arrows H, but every proper subgraph G' of G does not arrow H. Many problems in Ramsey theory can be understood by studying the properties of this collection M_r(H). For example, Ramsey's theorem tells us that this collection is not empty. If you take the graph H to be the complete graph K_k and take the minimum of the number of vertices over this collection, then you recover the classical Ramsey number. And if you take the minimum of the number of edges, then you recover the size Ramsey number.

Well, we are interested in another extremal parameter, introduced by Burr, Erdős and Lovász in 1976: we take the smallest minimum degree of a graph G in this collection, denoted s_r(H). In the two-color case, when H is a clique K_k, Burr, Erdős and Lovász completely determined this parameter in 1976. For many bipartite graphs H, this parameter was determined by Fox and Lin, as well as Szabó, Zumstein and Zürcher. In 2015, Fox, Grinshpun, Liebenau, Person and Szabó initiated the study of the multi-color case. When H is a clique K_k, they determined s_r(K_k) up to logarithmic factors; here, for k fixed and r tending to infinity, they determined s_r(K_k) up to polylogarithmic factors.
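To fix notation, here are the definitions just used, written out in LaTeX (my transcription of the slide's notation):

\[
  G \to (H)_r \;\iff\; \text{every } r\text{-coloring of } E(G) \text{ contains a monochromatic copy of } H,
\]
\[
  \mathcal{M}_r(H) \;=\; \bigl\{\, G : G \to (H)_r \text{ and } G' \not\to (H)_r \text{ for every proper subgraph } G' \subsetneq G \,\bigr\},
\]
\[
  s_r(H) \;=\; \min_{G \in \mathcal{M}_r(H)} \delta(G).
\]

For instance, in the two-color clique case the Burr, Erdős and Lovász result mentioned above reads s_2(K_k) = (k-1)^2.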
Well, in the special case k = 3, they also made an effort to make their bounds as good as possible, and there remains a gap of a logarithmic factor between the upper bound and the lower bound in their paper. They conjectured that their lower bound is correct; that is, the conjecture says that s_r(K_3) is O(r^2 log r). Well, we prove this conjecture as a corollary of our packing result, so confirming it implies that s_r(K_3) has order of magnitude r^2 log r. Let me mention that they already had a method to convert a packing result such as ours into a bound on s_r(K_3). In their paper they actually packed the graphs G_i via a Lovász Local Lemma argument, and that's why there is an extra log factor in their upper bound. To remove the logarithmic factor, the trick is to pack the G_i sequentially with a triangle-free process; for some technical reasons, we use the semi-random variation of the triangle-free process to get our packing result. Is there a question at this point? "This upper bound, that's from a construction, right?" Yes, for the upper bound you need a construction: you want to show there is some graph.

So let's move on to our main technical result, which we use to prove our packing result. Our main technical result finds a triangle-free graph G in a host graph H. The result states the following. Assume rho is some edge density and s is the size of the large vertex sets under consideration. If the host graph H is not too sparse, meaning that for any two vertex sets A and B of size s the number of edges between A and B in H is at least epsilon times |A| times |B|, then we can find a triangle-free graph G as a subgraph of H such that the number of edges between A and B in G is roughly the edge density rho times the number of edges between A and B in H, and this holds for all such vertex sets A and B of size s.

First of all, you can see that this number is greater than zero. Therefore such a graph G has no independent set of size more than 2s, and so G is a nearly optimal R(3,t) graph. This formula also tells us that the graph G behaves random-like, because the number of edges between A and B in G is concentrated around the edge density rho times the number of edges between A and B in H; this property we call the pseudo-random property. And this pseudo-random property guarantees that, after taking the graph G out of H, a similar fraction of edges remains between all the large vertex sets, and that is what implies our packing result.

Here is why. We start with H_0 equal to the complete graph K_n, and then we sequentially choose the graph G_i in H_i using our main technical result, where H_i plays the role of H and G_i plays the role of G. Then we remove the edges of G_i from H_i to get H_{i+1}, and repeat. The pseudo-random property guarantees that the number of edges between A and B in H_i is roughly (1 - rho)^i times the number of edges between A and B in H_0, which is |A| times |B|. Therefore we can iteratively apply our main technical result as long as this parameter (1 - rho)^i is greater than epsilon, and we stop when the number of edges between A and B in H_i is roughly an epsilon fraction.
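Schematically, the statement and the iteration it drives look as follows (my paraphrase of the slides; the exact error terms and the quantitative relation between rho, epsilon and s are in the paper):

\[
  e_H(A,B) \ge \varepsilon\,|A|\,|B| \ \text{ for all } |A| = |B| = s
  \;\Longrightarrow\;
  \exists\, G \subseteq H \text{ triangle-free with } e_G(A,B) \approx \rho\, e_H(A,B) \text{ for all such } A, B.
\]
\[
  \text{Iterating } H_0 = K_n,\ H_{i+1} = H_i \setminus E(G_i): \qquad
  e_{H_i}(A,B) \;\approx\; (1-\rho)^i\,|A|\,|B| .
\]

So the hypothesis keeps holding while (1 - rho)^i is at least epsilon; and since e_G(A,B) > 0 for all such pairs, any 2s vertices contain two disjoint sets of size s spanning an edge of G, which is exactly the independence-number bound of 2s.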
Then a double-counting argument implies that we have covered at least a (1 - epsilon) fraction of the edges by the graphs G_i that we have taken out. So that's how our main technical result implies our packing result.

Let me discuss the proof of our main technical result. It is based on the semi-random variation of the triangle-free process, and we do not require degree or co-degree regularity of the host graph H; therefore the graph H can just be a dense graph, not necessarily a complete graph. As discussed, because we need to control the errors over many iterations of the semi-random variation of the triangle-free process, it is important to control the errors, and for this purpose we introduce a self-stabilization mechanism built into the process. Some of the probabilistic tools we use are bounded-differences inequalities and an upper tail inequality of Warnke. Next, I will tell you how our main technical result actually works and how we prove it. This may be an excellent point for questions: is there any question about this main technical result? "Maybe you can tell us a little bit about what is being stabilized. Is it the degree sequence, or what exactly are you controlling?" Okay, yeah. I will discuss the self-stabilization in a later slide; actually, we just remove some more edges. Yeah, I will discuss it a couple of minutes later. "Okay, thanks."

Okay, so let's move on to how we actually prove our main technical result. Recall that to prove it, we try to find a pseudo-random triangle-free graph by the semi-random variation of the triangle-free process in a host graph H. To construct this triangle-free graph, which we call T_j, we run the semi-random variation of the triangle-free process for many steps, and we keep track of three kinds of edge sets along the way. The first kind is E_j, which you can think of as the random edge set. The second kind is T_j, which is a subset of the random edge set E_j; T_j is triangle-free and has approximately the same size as E_j. The third kind of edge set we need to track is the open edge set O_j: an edge is open if it is not in the random edge set E_j and it does not create a triangle with any two edges of E_j. As we have seen, if we have a triangle and two of its edges are in the edge set E_j, then the third one is called closed, and it can never be in the open edge set O_j. So these are the three kinds of edge sets we want to track.

The idea of each step of the semi-random variation of the triangle-free process is the following. First we generate a random edge set, which we call Gamma_{j+1}, inside the open edge set O_j. Then we do some alteration of this random edge set Gamma to get a subset Gamma', such that the union of the triangle-free set T_j with this Gamma' is still triangle-free. And finally we update the open edge set, removing this random edge set Gamma, among other edges, from the open edge set O_j. That is the idea of each step. Now I will give you the definitions of these three kinds of auxiliary edge sets. Indeed, we start with O_0 equal to the edge set of the host graph H, which is a dense graph, and with E_0 and T_0 equal to the empty set at the beginning. As you can see, these three kinds of sets satisfy the relations here.
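In symbols, the bookkeeping just described is the following (my notation, following the talk):

\[
  T_j \subseteq E_j, \qquad T_j \ \text{triangle-free}, \qquad |T_j| \approx |E_j|,
\]
\[
  O_j \;=\; \bigl\{\, e \in E(H) \setminus E_j \ :\ e \text{ forms no triangle with two edges of } E_j \,\bigr\},
\]
\[
  O_0 = E(H), \qquad E_0 = T_0 = \emptyset .
\]

One round then has three moves: sample a p-random subset Gamma_{j+1} of O_j, prune it to a subset Gamma' so that the union of T_j and Gamma' stays triangle-free, and set E_{j+1} to be the union of E_j and Gamma_{j+1} while shrinking the open set accordingly.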
So that is the initialization of the algorithm, and then we need to define the updates and control the three sets along the way.

First, we need to generate the random edge set. We take the random edge set Gamma_{j+1} as a p-random subset of the open edge set O_j: we include each edge of O_j independently with probability p. Then we construct the random edge set E_{j+1} as the union of E_j with the random edge set we have just chosen. Thereby we have updated the random edge set E_{j+1}, and at this moment we have finished the first step.

Next, recall that we need to do some alteration on the edge set Gamma_{j+1} to get a subset Gamma'. We want to ensure that the triangle-free set T_{j+1} constructed this way has similar size to the random edge set E_{j+1}, so we need to ensure that the size of the subset Gamma' is roughly the same as the size of the random edge set Gamma. How do we ensure that? Because we can choose p small enough, you can imagine that the random edge set Gamma_{j+1} is really small, which means very few new triangles are created in the union of this Gamma with E_j. Therefore it is enough to remove very few edges to destroy all new triangles, and then the union of the remaining subset with the triangle-free set T_j is still triangle-free. That is the idea behind this approximate equality.

Let me point out what kinds of new triangles can be created in this step. There are two cases. The first case: we have a triangle in which one edge is in the random edge set E_j and the other two edges are in the open edge set O_j, and we include both of the open ones into the random edge set Gamma_{j+1}; then a new triangle is created, and we call such a pair of edges a bad pair. The other case: we have a triangle in which all three edges are open, and we include all of them together into the random edge set Gamma_{j+1}; then a new triangle is created, and we call such three edges a bad triple. Therefore, to destroy all the new triangles, it is enough to remove the edges of a maximal edge-disjoint collection of bad pairs and bad triples: by maximality, every bad pair and bad triple loses at least one edge. That is how we define the subset Gamma': we just remove such a maximal collection of edges from the random edge set Gamma. If we then take the union of this Gamma' with the triangle-free set T_j, we get a triangle-free set T_{j+1}; this works by the maximality of the collection we remove. And the reason that we remove a maximal edge-disjoint collection here, rather than removing one edge from each bad pair and bad triple, is that it is much easier to analyze because of the disjointness. At this moment we have updated the triangle-free set T_{j+1} and finished the second step.
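Before moving to the open-set update, here is an illustrative Python sketch of the two moves just described (my own simplification: edges are frozensets of two vertices, the maximal edge-disjoint collection is built greedily, and all the error bookkeeping of the real proof is ignored):

import random
from itertools import combinations

def semi_random_step(E, T, O, p, rng=random):
    """One round of the semi-random variation, first two moves only.
    E: edges chosen so far; T: triangle-free subset of E; O: open edges.
    Returns (E_next, T_next, gamma); the open-set update comes next."""
    gamma = {e for e in O if rng.random() < p}       # p-random subset of O_j
    # Collect bad pairs/triples: gamma-edges that close a triangle together
    # with an edge of E (pair), or entirely among themselves (triple).
    bad = []
    for e, f in combinations(gamma, 2):
        shared = e & f
        if len(shared) == 1:                          # e and f share an endpoint
            third = (e | f) - shared                  # the closing edge
            if third in E:
                bad.append({e, f})                    # bad pair
            elif third in gamma:
                bad.append({e, f, third})             # bad triple
    # Greedily build a maximal edge-disjoint collection and drop its edges;
    # by maximality, every bad pair/triple loses at least one edge.
    used = set()
    for group in bad:
        if not (group & used):
            used |= group
    gamma_prime = gamma - used
    return E | gamma, T | gamma_prime, gamma

Since p is tiny, very few bad pairs and triples occur, so the size of gamma_prime is close to that of gamma, matching the heuristic above.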
The last step is to update the open edge set O_{j+1}, and I will show you how that works. Recall that we want O_{j+1} to be disjoint from E_{j+1}, so we need to remove the random edge set Gamma_{j+1} from O_j. And because we want the edges in the open edge set to be genuinely open, we also need to remove some closed edges. For the closed edges at this step, there are two cases. The first case: we have a triangle in which one edge is in E_j and the other two edges are in O_j, and we include one of the open ones into the random edge set Gamma_{j+1}; then the third edge becomes closed, and we need to remove these closed edges from O_j to get O_{j+1}. The other case: all three edges of a triangle are in the open edge set O_j, and we include two of them into the random edge set Gamma_{j+1}; then the third one becomes closed, and again we remove these closed edges from the open edge set O_j. These first two parts are easy to understand. But we actually also remove some extra edges for technical reasons, and I will explain why; this part of the edges is related to the self-stabilization mechanism.

Let me explain why we need to remove such extra edges. We remove some extra random edges, and here is the reason. Take an edge e = uv, and consider the pairs of edges that form a triangle with it where one edge of the pair is in the edge set E_j and the other edge is in the open edge set O_j. As you can see, if we add any of these open edges, we will close the edge uv. We denote the number of such pairs by Y_e; such a pair means one edge in E_j and the other in O_j. Therefore the probability that the edge e is not closed in the next step is roughly the probability that none of these open edges is included in the random edge set Gamma_{j+1}, which is roughly (1 - p) to the power Y_e. Let me mention that this is only approximately equal, because there is another case: a pair where both edges are open, and we include both of them into the random edge set Gamma. But as you can imagine, the probability of that case is very small, so it is negligible. Therefore the probability that e is not closed is roughly (1 - p)^{Y_e}.

But as you can see, this probability really depends on the edge e through the parameter Y_e, and Y_e can vary from edge to edge because we do not assume regularity of the host graph. That would cause some problems. To overcome this difficulty, we additionally remove each edge e independently with a probability q_e, so the probability that an edge stays open is roughly (1 - p)^{Y_e} times (1 - q_e). Because we can choose the parameter q_e differently for different edges e, we can make this probability roughly the same for all edges e. That is what we call the self-stabilization mechanism, and it is also a key difference between our proof and Kim's proof: in Kim's proof, to get this kind of regularity, Kim used an earlier regularization result of Kahn to artificially modify the graph and make it regular, while here we just remove some extra random edges, which is much easier to implement in an algorithm. At this moment we have updated the open edge set O_{j+1}: in this last step we removed the random edge set, the closed edges, and some extra random edges from the open edge set.
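As a worked toy version of this equalization (the normalization by the maximum is my own choice for illustration; the paper's exact calibration may differ): writing Y_max for the largest relevant value of Y_e over open edges, one can take

\[
  q_e \;=\; 1 - (1-p)^{\,Y_{\max} - Y_e},
  \qquad\text{so that}\qquad
  (1-p)^{Y_e}\,(1 - q_e) \;=\; (1-p)^{Y_{\max}} .
\]

The survival probability then no longer depends on e, and since Y_e is at most Y_max, each q_e lies in [0, 1), as a probability must.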
And therefore, at this moment, we have updated all three kinds of auxiliary edge sets, and we can move on to the next step. So that's how the semi-random variation of the triangle-free process works in the proof of our technical result.

By analyzing this semi-random variation of the triangle-free process, we can show that the number of edges between any two large vertex sets A and B in the open edge set is roughly a parameter q_j times |A| times |B|, where q_j satisfies some differential equation. Here we assume the host graph H is the complete graph for ease of illustration; that's why we have |A| times |B| here. Otherwise, if you just assume H is a dense graph, then we should have the number of edges between A and B in H instead. Now I will tell you how this concentration result implies our main technical result. Recall that for our main technical result we want the number of edges between A and B in the final triangle-free set T_J to be roughly the edge density rho times |A| times |B|. Here we choose the probability p to be some small parameter sigma over the square root of n, where p is the probability that an edge in the open edge set is included into the random edge set Gamma.

So consider the number of edges between A and B in the final triangle-free set. We can decompose this edge set according to the step at which each edge was included into the triangle-free set, so the number of edges between A and B in the final triangle-free set is the sum over the steps of the number of edges included at each step; that's why the first equality holds. Recall that we keep the increment Gamma' of the triangle-free set roughly the same size as the random edge set Gamma_{j+1}; that's why we have the approximate equality here. And because this random edge set Gamma is just a p-random subset of the open edge set O_j, the size of this term is roughly the probability p times the number of edges between A and B in the open edge set. If we now plug in the concentration result, we get this sum. Because q_j satisfies some differential equation, this discrete sum is approximately equal to an integral, so we can approximate the discrete sum by the integral. Then, by the fundamental theorem of calculus, this integral is roughly equal to a certain function over the square root of n, times |A| times |B|, and by studying the behavior of this function we can show that it is approximately the parameter we defined as the edge density rho. Therefore we have shown that the number of edges between A and B in the triangle-free set T_J is roughly the edge density rho times |A| times |B|, as claimed; I sketch this computation schematically below. Recall that if the host graph H is not the complete graph but just some dense graph, then instead of |A| times |B| we get the number of edges between A and B in H. So that's how we prove our main technical result.
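Schematically, the computation just described reads as follows (a back-of-the-envelope transcription; the time rescaling and the identification of the limiting integral with rho are my shorthand, with all error terms suppressed):

\[
  e_{T_J}(A,B)
  \;=\; \sum_{j=0}^{J-1} e_{\Gamma'_{j+1}}(A,B)
  \;\approx\; \sum_{j=0}^{J-1} e_{\Gamma_{j+1}}(A,B)
  \;\approx\; \sum_{j=0}^{J-1} p\, e_{O_j}(A,B)
  \;\approx\; |A|\,|B| \sum_{j=0}^{J-1} p\, q_j ,
\]
\[
  \sum_{j=0}^{J-1} p\, q_j \;\approx\; \int_0^{pJ} q(x)\,dx \;=\; \rho ,
  \qquad p = \frac{\sigma}{\sqrt{n}} ,
\]

where q(x) is the solution of the differential equation tracking the open-edge density, and evaluating the integral yields the edge density rho of the main technical result.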
Let me mention that it is actually not so easy to get this concentration result, and I will explain why: it is because we try to get such concentration for all the large vertex sets A and B, and that causes some difficulty. Here is why. Suppose we have a vertex v1 that has many neighbors inside the random edge set E_j, so it has many E_j-neighbors over here, and v1 also has some open neighbor, which we call v2, so v1 v2 is an open edge. As you can imagine, v2 can have many common open neighbors with the E_j-neighborhood of v1, so you can assume there are many such common neighbors over here. Now choose the large vertex set A to contain the E_j-neighbors of v1, and choose some vertex set B which contains these common neighbors. Then for this pair of large vertex sets, if at the next step we include the open edge v1 v2 into the random edge set Gamma_{j+1}, then, as you can see, all of these common-neighbor edges become closed at once, so a large number of open edges between A and B are closed in a single step. That causes difficulty in applying concentration inequalities, because of this large one-step change, and we want the concentration to hold for all the large vertex sets A and B simultaneously. So this needs some more careful analysis; I will not go into these technical details, and instead I will go to the summary.

As a summary: instead of finding just one nearly optimal R(3,t) graph, as Kim and Bohman did, we find an almost-packing of nearly optimal R(3,t) graphs. It means that given any epsilon greater than zero, we can nearly decompose the complete graph K_n into a collection of edge-disjoint graphs G_i, such that the graphs G_i in this collection contain at least a (1 - epsilon) fraction of the edges of the complete graph K_n, and each graph G_i in this collection is triangle-free with small independence number. As a remark, our result is a natural packing extension of Kim's result, and we can obtain our packing result, with high probability, by a polynomial-time randomized algorithm, not just show its existence. Secondly, our packing result establishes the conjecture about the parameter s_r(K_3) in Ramsey theory by Fox and others.

Let me end with two questions. First: to prove the upper bound on the Ramsey parameter s_r(K_3), it is actually enough to cover just an epsilon fraction of the edges, while our packing result covers a (1 - epsilon) fraction. So we wonder: is there any application of our stronger packing result, maybe in Ramsey theory? Second: if one generalizes the greedy packing result to larger cliques, it turns out that it does not help to improve the upper bound on s_r(K_k) for larger k, because there are better constructions based on projective planes. Therefore we wonder: is there any application where the random greedy version of the K_k-free process would be useful?

This talk is based on a paper by me and Lutz Warnke, which has the same title as this talk and has recently been published in Combinatorica. Let me stop here. Thank you. Is there any question?

Thank you. Let's go ahead and thank our speaker if we could, and then we'll open it up for any questions. Okay, do we have any questions for our speaker? "You may have mentioned this, but with this algorithm, when do you stop? Or, I guess, you are forced to stop at some point, right? Because you would have put too many edges in. I don't know if that question makes sense, but..."
Actually, there are two levels of iteration here, and this is about the higher level of iteration, where we iteratively use our main technical result to find each graph G_i in the packing collection. Because each time we remove a rho fraction of the edges, the remaining fraction of edges is roughly (1 - rho)^i. And if you want to apply the main technical result again, you need to guarantee that there are still enough edges left; that's why we need to keep this parameter at least epsilon, and we stop when this parameter is roughly epsilon. "Okay, that makes sense. Thank you. Actually, a follow-up question to that: how dense is the graph that's left over? It's epsilon, basically?" So for the graph left over, we still have some fraction of edges: you can use this formula to see that in the leftover graph H_i this parameter is roughly epsilon, so roughly an epsilon fraction of the edges is left over. "Well, what is epsilon, I guess, is what I'm asking: how sparse is the remaining piece?" For epsilon, you can choose it to be any constant greater than zero. "But you could also take it tending to zero. How fast? You don't know, maybe?" Yeah, but then the constant may change; the independence number will really change. Actually, it depends on something like one over epsilon squared or so.

"I guess another question: of course, it would be nice to have the Ramsey result for K_4 as well, because that's still not tight, as I recall. And you don't need the packing result for that, right? You just need the construction. So have you tried your methods on K_4?" Yeah, actually we have some work in progress, not just for K_4 but for H-free graphs, where H can be many kinds of graphs. It is very similar to this one; it combines a method based on fractional matchings with bounds on the independence number.

Do we have any other questions for our speaker? Okay, in that case, thanks, He, for coming around. It was a great talk, good to have you; we're all happy to have you here. And I guess we'll go ahead and sign off, unless, Josh, is there something else you want to say? Nope, thanks very much. So actually, for the H-free result, we can take H to be, maybe, a cycle or a bipartite graph. Thanks for attending. Thanks. Thanks everybody. That was a very nice talk and a very nice technique, so thank you. All right, thanks.