Okay. Well, thanks everybody for making it out, and thanks to John for agreeing to speak. Today we've got John Wierman from Johns Hopkins University, who will be talking about improved bounds for the site percolation threshold of the hexagonal lattice. Take us away, John.

Great, thanks very much, and thanks for the invitation. I'm glad to try giving my first Zoom seminar talk. I'll start with the one stupid discrete math joke that I know. One of my colleagues is a graph theorist, and he has said at various times that there are three kinds of discrete mathematicians: those who can count and those who can't. With that out of the way, I'm going to talk about improved bounds for the site percolation threshold of the hexagonal lattice. The first half of the talk will explain what the problem is, the second half will discuss some results, and the third half will explain the ideas behind them. So, I can't count either. This work is supported by a fund we have in the department, the Acheson J. Duncan Fund for research in probability and statistics, which is very useful in giving me travel money and money for computers, which will come up later in the talk.

So, what is site percolation? We start with an infinite graph. It might as well be considered infinite: in physics it's usually thought of as an atomic lattice with some structure, but we're interested in large-scale phenomena, so we might as well consider the infinite graph. All the examples I'll use are periodic, translation invariant in some sense, like the hexagonal lattice, which is the one we'll focus on. Each vertex is retained with probability p and deleted otherwise, independently, and we look at the subgraph induced by the retained vertices. Usually this is phrased in terms of whether vertices allow fluid or electrons or disease or whatever to pass through, and they're called open or closed. I'll mostly use that terminology, but "open" really means retained, the others are deleted, and we just look at the induced subgraph. We're interested in the connectivity properties of this random graph G_p, and particularly in whether there's an infinite connected component, which, since I've been at this for thirty-some years and intermingle with the physicists and engineers where most of the applications are, I usually call a cluster. That's the basic idea of the model.

Here's an example. A configuration is a complete specification, for every vertex, of whether it's open or closed, and the red ones here are open. Then in the induced subgraph, which hopefully I've got right here, an edge is included if both of its endpoints are open. This is a real simulation that I did with a ten-sided die. If p is 0.6, we get these separate clusters, relatively small, some larger, and you can imagine that pattern continuing through the entire lattice. If p equals 0.8, then it looks like we've got some structure that could just be infinite, something that reaches out and permeates the whole lattice. Since we're in a pandemic: this could be like the difference between a disease spreading only locally and being naturally quarantined, versus becoming a pandemic. Now, these two configurations are coupled. What I did was roll the die once for each vertex, and if I got a one through six, I made the vertex open in the p = 0.6 configuration.
If I got one through eight on the same roll, then I made it open in the p = 0.8 configuration. So every vertex that's open at 0.6 is open at 0.8 as well, but some additional ones are open at 0.8, so things have a higher probability of being connected there: take any subset of the vertices, and the probability that they're connected is higher at 0.8 than at 0.6. That's the idea of a coupling. Technically, you have two probability measures with specified marginal distributions, constructed on a common probability space so that the random element associated with one measure always dominates the random element associated with the other. So we're looking at a more richly connected configuration on one side, and coupling is equivalent to stochastic ordering, which I'll explain. My method of finding the bounds depends on that equivalence, on calculating when one measure is stochastically larger than another. That's what's behind all of this, except that I will not be looking at every connection, just connections between vertices on the boundary of a subgraph, which reduces the computations and is still valid.

The site percolation threshold is the key quantity, denoted p_c. It was originally called the critical probability, but so many different probabilities in so many applications are considered critical that it's become known as the percolation threshold. The parameter space is the interval [0,1], and there's some value in it which divides the regime where all components are finite from the one where there's an infinite component. In the cases we're looking at, although it doesn't matter here, for all these two-dimensional examples, if p is equal to p_c then there is no infinite cluster. So the question is whether there's a collection of small clusters, or a spanning cluster that in some sense permeates the lattice, and these simulations illustrate that. From them, we would expect the site percolation threshold of the hexagonal lattice to be somewhere between 0.6 and 0.8: as we increase p, where do we start getting infinite components?

The bond percolation model is much more well studied, and more results have been obtained there; there's more structure in some sense. There, each edge is randomly open or closed, independently. If you're familiar with Erdős-Rényi random graphs: take the complete graph as the underlying graph and do this, and you get the Erdős-Rényi random graph, so it's completely parallel. The bond percolation threshold is defined similarly. Here we've got the square lattice. I did the same type of coupled simulation, and from it it's fairly apparent that you would expect its bond percolation threshold to be somewhere between 0.4 and 0.6.

As I mentioned, there are applications: flow of fluids in a porous medium; freezing or melting, where the question is whether there is structure frozen through the whole lattice, or small clusters that can move relative to each other, like a liquid; the spread of disease in a population; and so on. There are hundreds of papers each year in the mathematical literature, and thousands if you count physics and engineering as well. So, my business for the last 40 years or so: the grand challenge is to understand what determines the value of the percolation threshold, and, the gold standard, whether you can find exact values.
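Before moving on, here's a minimal sketch, not from the talk, of the coupled simulation just described: one random draw per vertex drives both configurations, so the p = 0.6 open set is always contained in the p = 0.8 open set. A square grid stands in for the hexagonal lattice just to keep it short, and the helper function is my own illustration.

```python
import random

random.seed(1)
N = 20
# one "die roll" (uniform) per vertex, shared by both configurations
u = {(x, y): random.random() for x in range(N) for y in range(N)}

open_06 = {v for v, r in u.items() if r < 0.6}   # "rolled 1-6 on a d10"
open_08 = {v for v, r in u.items() if r < 0.8}   # "rolled 1-8 on the same roll"

assert open_06 <= open_08  # monotone coupling: open at 0.6 implies open at 0.8

def largest_cluster(open_sites):
    """Size of the largest connected component of the induced subgraph."""
    seen, best = set(), 0
    for start in open_sites:
        if start in seen:
            continue
        stack, size = [start], 0
        seen.add(start)
        while stack:
            x, y = stack.pop()
            size += 1
            for nbr in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                if nbr in open_sites and nbr not in seen:
                    seen.add(nbr)
                    stack.append(nbr)
        best = max(best, size)
    return best

print(largest_cluster(open_06), largest_cluster(open_08))
```

Because the open set at 0.6 is a subset of the open set at 0.8, any monotone connectivity event is automatically more likely at 0.8, which is the stochastic ordering used throughout the talk.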
The threshold is highly dependent on the structure of the lattice graph, and very few cases are solved exactly; we don't really know the nature of the dependence. I'll show you a few that are solved. There are rigorous bounds for some site percolation models, and the best of the few solved site cases really just come from bond-model transformations. For the others, an interval of about 0.1 is about the best we've had; I improve that now for the hexagonal lattice. Almost nothing is known rigorously about three-dimensional models: you can get bounds like bigger than 0.2 and less than 0.6, which is not very helpful. For site percolation, that is the typical range of bounds for unsolved cases. The physicists do all kinds of simulations and claim five or six digits of accuracy, but the claimed error bars are so small that you can look at four different studies and find no overlap, so each would say there's something wrong with the others. So I tend to take a consensus approach: how many digits do all the studies I can find from the last ten years agree on? I call that the consensus value.

The most commonly studied: you may have heard of the Archimedean lattices. That's the set of 11 lattices that are vertex-transitive tilings of the plane by regular polygons. In some talks I elaborate more on this, and you can look at chapter two of Grünbaum and Shephard to see them. If you take regular polygons, only certain combinations of angles add up to 360 degrees around a vertex, so you can tile the plane only in certain ways, and only 11 of those combinations can be extended to a vertex-transitive tiling of the entire plane. I won't show you pictures of all of them, but the three most common are the square lattice, the triangular lattice, and the hexagonal lattice, and the exact bond thresholds are known for those and a couple of others. That's pretty much it. Some physicists, well, actually a discrete mathematician, John Essam, and a physicist, Michael Sykes, conjectured values for these back in 1964, but it took a long time to prove any of them. The first exact solution was by Harry Kesten for the square lattice. The notation here is the one for Archimedean lattices: look at a vertex and list the sizes of the regular polygons around it, so instead of calling it the 4,4,4,4 lattice it's 4^4; likewise 3^6, and 6^3 is the hexagonal lattice. Kesten's result comes from the self-duality of the square lattice.

The triangular and hexagonal lattices are dual to each other, and the key property is that, under certain regularity conditions, the bond percolation thresholds of dual graphs add up to one. I was able to use some additional transformations that I knew about, together with Kesten's work, to extend the result. Using something that, depending on what field you're in, I learned as the star-triangle transformation and electrical engineers call the wye-delta transformation, you get a cubic polynomial whose root you can solve for to get the exact values: 2 sin(π/18), which you could not have guessed, unlike the one half you could guess for the square lattice. That was the result that should have gotten me tenure, but it didn't, and I left that university, but I got to a better one at Hopkins.
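For reference, the cubic in question is not written out on the slide, but it is the well-known star-triangle relation p^3 - 3p + 1 = 0, whose root in (0,1) is 2 sin(π/18), the triangular-lattice bond threshold; the hexagonal bond threshold is its complement. A quick numerical check:

```python
import math

# Verify that 2*sin(pi/18) is a root of the star-triangle cubic
# p^3 - 3p + 1 = 0, giving the triangular-lattice bond threshold;
# duality then gives the hexagonal bond threshold as 1 minus that root.

p_tri = 2 * math.sin(math.pi / 18)
print(p_tri, p_tri**3 - 3 * p_tri + 1)   # ~0.347296, residual ~1e-16
print(1 - p_tri)                          # ~0.652704, hexagonal bond p_c
```

That 0.652704... is exactly the number that reappears below as the long-standing lower bound for the hexagonal site threshold.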
I also learned a lesson about how universities work, I guess. Then there was a conjecture in 1982 by a famous physicist who has over 500 papers. He conjectured, with an actual formula, exact values for the Kagomé lattice, which is an Archimedean lattice, (3,6,3,6), and for the (3,12^2) lattice. The names come from Japanese weaving patterns: this is the Kagomé lattice and this is the extended Kagomé lattice. With some students, and in one case in a further improvement, I was able to get very accurate results for the bond thresholds using my substitution method, which I'll tell you about here. For the Kagomé lattice we actually got the first two digits of the upper and lower bounds to agree, and for the extended Kagomé the first three digits agree. Part of the reason I'm showing this is to illustrate that bond percolation models allow much better bounds; for other Archimedean lattices the bounds are on the order of 0.03 to 0.07 wide, much better than for site models.

Okay, so here are the results for the hexagonal lattice site model. The consensus estimate is 0.6970. For the lower bound, there's a result that says that for any lattice, the site percolation threshold is greater than or equal to the bond percolation threshold, and since my solution for the bond percolation threshold in 1981, nobody has been able to improve that lower bound. That has become one of the grand challenges, because it has stood for so long and everything everybody has tried has failed. Then an upper bound I proved with one of my PhD students in 2007, still quite a while ago, has stood for 13 years now as the best. Those two bounds are just a little less than 0.1 apart, which is actually good for site models. This is one of the premier models, one of the most studied: the square lattice and the hexagonal lattice are the most natural ones to want to know about, but there's been very little progress on them, so those are my grand challenges.

My new results: I've improved the lower bound to 0.656-something. Not too much there, but after 39 years at least I finally got something. And I've improved the upper bound. Now, a comment about the anticipated results. I've got a new computer on order through the Duncan fund, which is very nice to have. Up until now the constraint was speed; we've made improvements in the algorithms that solved a lot of the speed problems, and now memory is the constraint. I've done some calculations with a region that doesn't give a valid bound, but which suggests that when I get the new computer with more memory, I can do one that will do even better. It shows the results should be improvable to at least this for the lower bound and that for the upper bound, and I think it will actually give me something on the order of 0.67 and 0.73, which would reduce the length of the interval between the bounds by about 35%. That's much bigger than most improvements when bounds are improved in these types of problems. So I'm looking forward to getting it, hopefully in a week or two. Okay. So, I call it the substitution method.
What I do is compare an unsolved percolation model, here the hexagonal lattice site model, to a solved model that I'll call the reference model, using stochastic ordering of probability measures, where we compare the connectivity between the vertices on the boundary of a region. We want to decompose each lattice as a vertex-disjoint union of copies of a subgraph, its substitution region. So here's the unsolved lattice, and here's the reference lattice. This is called the martini lattice. It's drawn in a way that's not quite a martini, but if these edges extended straight up, this would be the stem of the martini glass, then the sides, and this would be the liquid. The person who invented this lattice: it's contrived, but it's really nice because it's solvable, and solvable in a neat way that gives more possibilities. So this is the reference lattice, and the martini lattice can be constructed from vertex-disjoint unions of these pieces, and the hexagonal lattice from disjoint unions of these. For site models we want the different regions to be stochastically independent, and the randomness lives in the vertices, so we have to subdivide the edges that go out: these are vertices we've inserted to subdivide each outgoing edge partway to the next vertex of the lattice, in every case.

The thing about the martini lattice is that it's a two-parameter model that can be solved. You can make it a one-parameter model, but here we look at two parameters. Here's a bigger region of the lattice. If we have a single parameter p for the probability of each vertex being open, then we're looking for the point dividing the region with only finite clusters from the region where infinite clusters exist. But in a higher-dimensional parameter space, there is a region below a critical surface where all clusters are finite, and above it infinite clusters exist with positive probability. In this case, we take the vertices at the centers of the stars to be open with probability s, and the vertices in the triangles open with probability t. There's a polynomial equation giving the critical surface; it would take a whole hour's talk to explain why it's true, but this is a solved model. You can actually solve for s, which is even nicer for doing exact symbolic calculations in Matlab: s = 1/(3t^2 - t^3). That's what I've graphed there: this is the critical surface, this is the region with no infinite clusters, and this is where percolation occurs, where infinite clusters exist. So instead of one reference model I've got an infinite collection of reference models, one for each value of t, and I can optimize over that. (There's a quick numerical sketch of this surface below.)

For a valid solution, as I said, the substitution region has to tile: you need a vertex-disjoint union of copies of it giving the whole lattice. With the corresponding region for the hexagonal lattice here, you could do that, because you can turn it upside down to fit copies here and here and build the whole lattice with it.
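Here is the promised sketch of the critical surface. The formula s = 1/(3t^2 - t^3) is my transcription of the slide, so treat this as illustrative: assuming that form, the homogeneous case s = t = p reduces to 3p^3 - p^4 = 1, and bisection recovers the martini-lattice site threshold of roughly 0.7648.

```python
# Numerical sketch of the martini-lattice critical surface, assuming
# the transcribed form s = 1/(3 t^2 - t^3).  On the diagonal s = t = p
# this reduces to 3 p^3 - p^4 = 1, the homogeneous site threshold.

def s_on_surface(t):
    return 1.0 / (3 * t**2 - t**3)

def bisect(f, lo, hi, tol=1e-12):
    """Find a root of f on [lo, hi], assuming f(lo) < 0 < f(hi)."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) < 0 else (lo, mid)
    return 0.5 * (lo + hi)

p_star = bisect(lambda p: 3 * p**3 - p**4 - 1, 0.5, 1.0)
print(p_star)                        # ~0.764826, homogeneous martini site p_c

for t in (0.70, 0.75, 0.80, 0.85):   # a few reference models on the surface
    print(t, s_on_surface(t))
```

Each (s, t) pair printed at the end is one member of the infinite family of reference models that the method can optimize over.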
Returning to the tiling question: because of the asymmetry between the triangles and the stars, you can't do that with the martini lattice. This region does not tile the whole lattice. You can do calculations for it like for any other substitution region, but I can't prove that the resulting bounds are valid. If they were valid, the values I mentioned would be right. I just did a couple of examples to see what I would get, to see whether it would be better than what I've gotten using the region I do have valid bounds from, and I didn't even optimize over the whole critical surface, which would give better values in each case. But they're not valid bounds. On the other hand, when I get the new computer and can run it, I'm confident there will be plenty of memory to do this region. We call these tiles: for the hexagonal lattice these are hexagonal tiles, and this is a four-tile region. It does fill the whole plane, so it would give valid bounds, and since it contains the other one, all my experience says it will give better bounds than the invalid one that had only three tiles. That's why I claim I should get something like 0.67 and 0.73 when I get to the point of using this. So that's coming attractions.

Okay, let's start describing the method. Those were the results and what I'm expecting to get; now the third half, I guess. How are we doing on time? That's good, maybe a bit more than halfway through.

We look at these substitution regions. Viewing them in the main lattice, the original lattice, we have subdivided the edges going out and put in these ghost vertices, and those are what we'll call the boundary vertices. We have to choose regions with the same number of boundary vertices on each side, because basically we're going to say: for particular values of s and t, we know the connectivity probabilities here, so we have a fixed probability measure; over here we've got the p, which can vary. If we run p up high enough that the connectivity between the boundary vertices is greater here than over here, then, since this one was at criticality, this one is supercritical and we know we have infinite clusters. And if we run p down far enough that it is dominated by the probability measure describing the connectivity here, then we know we have a lower bound. But we do have to have the same number of boundary vertices, because we're basically saying that we look at the connectivity on each of these regions and piece them together to get the same comparison for connectivity on the whole lattice. We aren't requiring the same connectivity between arbitrary sets of vertices, just between the ones on the boundary, because if you're going to have an infinite cluster, it has to connect to something outside, and that connection has to pass through at least one of the boundary vertices.

So we look at set partitions of the boundary vertices, and here's where we get into more of the discrete math, so you can wake up again. Every configuration of open and closed vertices determines a partition of the set of boundary vertices into blocks: the boundary vertices in the same connected component go in the same block.
For example, with 10 boundary vertices, like in the regions I showed, the notation I use is that these are the various blocks: 1, 2, 3, and 6 are all connected by open paths, in the same connected component within the substitution region; 4 and 5 are connected, but in a different component; 7, 8, and 9 are connected; and 10 is not connected to any of the others. Any configuration gives us a partition of the boundary vertices like this. And this is a huge reduction in the things we have to worry about: the martini lattice substitution region has 28 vertices, so there are 2^28 configurations of open and closed, which is a bit more than 268 million. But the Bell number of 10 says there are only 115,975 partitions of the boundary vertices. So we look at a probability measure on the partitions rather than on the configurations, and we get a huge reduction.

So the key is to look at partitions, and for each partition we want to figure out its probability, both on the martini lattice under the fixed probability measure at criticality, and on the hexagonal lattice with the parameter p in it. On the hexagonal lattice, any configuration has some number of open vertices and some number of closed vertices, and since they're all independent, a configuration with m open vertices out of n has probability p^m times (1 - p)^(n - m). A lot of different configurations give you the same partition; you add those probability polynomials up and you get the probability of the partition. Some of these are deceptively simple looking: for the hexagonal lattice site model, the probability of all the boundary vertices being connected is p^13 (4 - 3p), and for the martini lattice model at criticality, where I've used the relationship between s and t to get everything in terms of t, the probability that they're all connected is this expression. But for things like the example partition I gave, you get polynomials with, well, 24 or so terms and coefficients in the ten-thousands, so they get to be real bears. In principle, though, you can calculate all of them, and we've developed programs to do it: initially something like this with six boundary vertices took three days, and now we can do the ten-vertex case in about an hour, so that's not the constraint anymore.

Probably most of you know about posets. On the partition lattice, you can order the set partitions of the boundary vertices by refinement, where one partition is less than or equal to another if every block of it is contained in a block of the other; take a partition and break its blocks up further and you get a refinement. So you get a partition lattice, which in general, on a set of n boundary vertices, I'll call P_n. The only one I can write down in any reasonable form here is P_3, and this is its Hasse diagram: you have all three connected, or one of the three choices of two connected and the other one separate, or all disconnected. We do the same thing with the set partitions of the 10 boundary vertices.
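Here's a toy version of that partition-probability computation, on a hypothetical star-shaped region of my own (one internal vertex joined to three boundary vertices), rather than the talk's 28-vertex region. The convention that a closed boundary vertex sits in its own singleton block is also my assumption, made just to keep the bookkeeping simple.

```python
from itertools import product
import sympy

p = sympy.Symbol('p')
vertices = ['a', 1, 2, 3]              # 'a' internal; 1, 2, 3 boundary
edges = [('a', 1), ('a', 2), ('a', 3)]
boundary = [1, 2, 3]

def find(parent, v):
    while parent[v] != v:
        v = parent[v]
    return v

poly = {}
for states in product((0, 1), repeat=len(vertices)):
    open_set = {v for v, s in zip(vertices, states) if s}
    parent = {v: v for v in open_set}
    for u, w in edges:                 # union components along open edges
        if u in open_set and w in open_set:
            parent[find(parent, u)] = find(parent, w)
    blocks = {}
    for b in boundary:                 # group open boundary vertices by root
        key = ('open', find(parent, b)) if b in open_set else ('closed', b)
        blocks.setdefault(key, []).append(b)
    part = tuple(sorted(tuple(v) for v in blocks.values()))
    m = len(open_set)                  # each configuration contributes
    poly[part] = poly.get(part, 0) + p**m * (1 - p)**(len(vertices) - m)

for part, q in sorted(poly.items()):
    print(part, sympy.expand(q))
```

Summing over all 2^4 configurations, the polynomials add up to 1, and the all-connected partition {1,2,3} gets probability p^4, since every vertex must be open. The real computation does exactly this, only with far larger regions and the iterative build-up described later.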
The stochastic order I'm interested in: you have two probability measures, and you want to compare them on the basis of up-sets, the upward-closed sets. In probability, if you think of a certain value as being big, you pick whatever you consider big and look at the probability of being at least that big; one probability measure is stochastically larger than another if it gives a bigger probability to being big, no matter how you define big. In the partially-ordered-set setting, that is the condition for every up-set: if we have measures P and Q, and P(U) ≤ Q(U) for every up-set U, then Q is stochastically larger than P. And that's basically the idea of running the parameter up or down far enough to get the connectivity measure above, or below, the reference probability measure.

When you have stochastic ordering, you have the coupling. You could in principle construct realizations where one is more richly connected than the other, do that for every copy of the region, and piece them together to get a coupling for the infinite lattices. That's what this slide is saying: by the equivalence of stochastic ordering and coupling, if the measures on the substitution regions are stochastically ordered, then we can patch together realizations that are ordered, and thus one dominates, is more richly connected, on the whole lattice.

So we set the parameters of the solved model at its percolation threshold, somewhere on the martini lattice's critical surface, and then we run p up for the hexagonal lattice until it is stochastically larger, more richly connected, than something sitting right on the boundary of the region where infinite clusters exist; or we run it down far enough to be dominated from above. The smallest p that achieves stochastic domination from above gives an upper bound, and the largest p that is dominated from below gives a lower bound. It's relatively simple conceptually: you look at how richly things are connected on finite regions and use that to imply what happens on the infinite lattice.

It's stated in terms of up-set inequalities, but you can convert them to up-set equations, because the reference model has a fixed probability measure: the probability of an up-set under the solved model is just some real number between zero and one. As a function of p, the probability of a typical up-set on the hexagonal lattice increases from zero to one, so there's a unique solution where the two are equal; below it, domination goes one way, and above it, the other. So all you have to do is make sure you're above all of the solutions of the up-set equations, or below all of them. That's what this cartoon indicates: if we calculate all the up-set equation solutions and mark them on the interval from zero to one, then when p is above the largest of them, all the up-set inequalities are satisfied one way, and the unsolved model's probability measure is stochastically larger than the reference measure.
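A tiny brute-force illustration of that up-set check, with toy numbers of my own, on the five-element lattice P_3 from the Hasse diagram above:

```python
from itertools import chain, combinations

# P_3: bottom 1|2|3, three incomparable middle elements, top 123.
elements = ['1|2|3', '12|3', '13|2', '23|1', '123']
leq = {(x, x) for x in elements}
leq |= {('1|2|3', y) for y in elements}   # bottom is below everything
leq |= {(x, '123') for x in elements}     # top is above everything

def is_upset(S):
    """S is upward closed: x in S and x <= y forces y in S."""
    return all(y in S for x in S for (a, y) in leq if a == x)

def powerset(xs):
    return chain.from_iterable(combinations(xs, r) for r in range(len(xs) + 1))

upsets = [set(S) for S in powerset(elements) if is_upset(set(S))]

# Two toy measures; Q pushes mass toward coarser (more connected) partitions.
P = {'1|2|3': .4, '12|3': .2, '13|2': .2, '23|1': .1, '123': .1}
Q = {'1|2|3': .2, '12|3': .2, '13|2': .2, '23|1': .1, '123': .3}

dominates = all(sum(P[x] for x in U) <= sum(Q[x] for x in U) for U in upsets)
print(len(upsets), dominates)   # 10 up-sets; Q stochastically dominates P
```

On P_3 there are only 10 up-sets to check; the whole difficulty of the real problem, as the next part explains, is that on the 10-boundary-vertex partition lattice this count is astronomically large.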
If p is below all of the solutions, the ordering goes the other way. So the smallest and the largest up-set equation solutions give the lower and upper bounds. It just boils down to doing the calculations, but that's where the real problem lies. The real problem is finding two needles in a haystack, which I take to be harder than finding one. We're looking for the largest and smallest solutions of the up-set probability equations; those are the needles. The haystack is that, in principle, you have to solve all of the up-set equations, and the number of them grows super-exponentially. For 10 boundary vertices, there are at least 2^42,525 different up-sets. Where that comes from is a very crude lower bound: the partition lattice is a ranked poset, and the rank with the largest number of elements has 42,525 of them (the Stirling number of the second kind S(10,5)). Take the top element together with any subset of that rank: each choice gives a different up-set, including the empty subset, which gives just the top element. There are many more up-sets than that, but already you can't possibly, in the lifetime of the universe, have a computer get through them all. The way I look at it, you prove that the needles aren't in large portions of the haystack, and do various reductions, until you can actually do the calculation. This work, getting new bounds for the hexagonal lattice, is the first time I've ever been able to do it with a 10-boundary-vertex region. I've done two 8-boundary-vertex regions before, but this is the first 10. That's why we're getting the good results: partly from being able to handle a 10-boundary-vertex region, and partly from being able to use the martini lattice critical surface as the comparison.

So what are some of the reductions? Well, instead of working with configurations to calculate the probabilities, you can run through the configurations of some small regions only, and then identify some of their vertices together to build a larger region. It's relevant to this example: for the three-boundary-vertex region here there are five different partitions, and for this one there are five different partitions, and you can look at the 25 ordered pairs. If you identify this vertex, the two pieces are independent, so you take the probability of the partition here times the probability of the partition there, and that gives the probability of the combined outcome on the bigger region. Several pairs, grouped here by color, can give the same partition of the bigger region: all three of these ordered pairs give the same partition. So while there are 25 pairs of partitions, only 13 partitions have positive probability on the bigger region, a reduction of about half. You can then use that to build larger regions, and so on, and the computational savings are huge.
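Here's a sketch of that gluing step, on a toy construction of my own rather than the talk's regions: region A has boundary {1, 2, g} and region B has boundary {g, 3, 4}, with g the identified vertex. The regions are independent, so each ordered pair of partitions gets the product probability, and pairs inducing the same partition of {1, 2, 3, 4} are merged; that merging is where the 25-pairs-to-13-partitions saving comes from on the real regions.

```python
def merge(part_a, part_b):
    """Join two partitions across the shared vertex 'g', then drop 'g'."""
    blocks = [set(b) for b in part_a + part_b]
    merged = True
    while merged:                      # union any blocks that share a vertex
        merged = False
        for i in range(len(blocks)):
            for j in range(i + 1, len(blocks)):
                if blocks[i] & blocks[j]:
                    blocks[i] |= blocks.pop(j)
                    merged = True
                    break
            if merged:
                break
    return tuple(sorted(tuple(sorted(b - {'g'})) for b in blocks if b - {'g'}))

# Hypothetical partition distributions on the two pieces; in the real
# computation the weights are polynomials in p, not fixed numbers.
dist_a = {((1, 2, 'g'),): 0.3, ((1,), (2,), ('g',)): 0.7}
dist_b = {((3, 4, 'g'),): 0.4, ((3,), (4,), ('g',)): 0.6}

combined = {}
for pa, wa in dist_a.items():
    for pb, wb in dist_b.items():
        key = merge(pa, pb)            # independence: multiply probabilities
        combined[key] = combined.get(key, 0) + wa * wb

for part, w in combined.items():
    print(part, w)
```

Iterating this, you build the partition distribution of a large region from small pieces without ever enumerating the large region's configurations.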
That's what sped up the calculation of the probabilities, before you even start dealing with up-sets: you calculate not the probabilities of configurations, but iteratively the probabilities of the partitions themselves.

Another major reduction is that for a planar lattice, like the hexagonal lattice, you can use non-crossing partitions. I haven't written down the technical definition; I don't know how many of you know it. Order the elements of the set consecutively; I like to label them around a circle for visualization, since I'm a visual learner. The technical definition is that if you have elements a < b < c < d, where a and c are in the same block and b and d are in the same block, then all four have to be in the same block. I look at it as listing the vertices and putting a rubber band around the ones in a block: if none of the rubber bands cross each other, it's a non-crossing partition, because the blocks don't interfere with each other. This one crosses: if you had an open path connecting 1 to 3 and an open path connecting 2 to 5, then since the lattice is planar they would have to intersect, so all four of those vertices would be in the same block. So crossing partitions have zero probability and we can ignore them. That gives us a big reduction: the Bell number, the number of set partitions of the 10 boundary vertices, is 115,975, while the number of non-crossing partitions is a Catalan number, a little less than 17,000. That's a reduction of about a factor of seven, which makes a huge difference, because numbers like these are what occur in the exponent of the 2-to-the-something counting the up-sets. (There's a small enumeration sketch below.)

Then there's a symmetry reduction, which is also quite valuable. You have partitions, like {1,2,6}{3,4,5} and so on, that have the same probability after a rotation or a reflection, so you can form equivalence classes based on the rotations and reflections that both lattices share; in this case, it's just the right-to-left reflection. You group the set partitions into classes, and you can prove that the optimal value comes only from an up-set that is a union of classes, with no class split up. So you can define a class lattice, with the refinement order where one class is below another if its partitions refine the other's partitions, and the class lattice is much smaller as well. That reduces it down to 3,355 classes. These reductions are very important; they're what makes it possible to do this at all.
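Here's a sketch, my own code rather than the talk's, confirming the counts just quoted: enumerate set partitions of a 10-element set as restricted-growth strings, keep the non-crossing ones under the a < b < c < d criterion, and check that the totals are Bell(10) = 115,975 and Catalan(10) = 16,796. (It's brute force over all quadruples, so it takes a minute or two.)

```python
def partitions(n):
    """Yield set partitions as block-index lists (restricted-growth strings)."""
    code = [0] * n
    while True:
        yield code[:]
        for i in range(n - 1, 0, -1):        # increment the RG string
            if code[i] <= max(code[:i]):
                code[i] += 1
                for j in range(i + 1, n):
                    code[j] = 0
                break
        else:
            return

def crossing(code):
    """True iff some a < b < c < d has a,c in one block and b,d in another."""
    n = len(code)
    return any(code[a] == code[c] != code[b] == code[d]
               for a in range(n) for b in range(a + 1, n)
               for c in range(b + 1, n) for d in range(c + 1, n))

all_parts = list(partitions(10))
noncrossing = [c for c in all_parts if not crossing(c)]
print(len(all_parts), len(noncrossing))   # 115975, 16796
```

The factor-of-seven shrinkage matters precisely because these counts sit in the exponent of the up-set count.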
In the end, I don't actually solve the up-set equations directly anyway; we put it in a network flow setting. Let me check my time. Okay. You have your class lattice with its refinement order, and you set up a network flow with a source and a sink. One set of nodes is the set of classes, and a second set is an identical copy of the classes. The capacity from the source to a class is the probability of that class under one of the probability measures, and the capacity from a class in the second copy to the sink is its probability under the other measure. The capacities in between are all one, just so they're high enough not to interfere with anything, and there is an edge from each class to itself and to everything lower in the refinement order. You want to know whether you can pump a flow of value one from the source to the sink. If you can, you're basically saying that you can start with one probability measure and obtain the other by letting probability move only downward, which means the first measure is more richly connected than the second, and that's exactly what we want. So this is a way of checking stochastic ordering using network flow algorithms. We just use the standard augmenting-path method so far; we don't have a network flow person, and there are probably algorithms that could speed things up a little, or in different ways, but that's the idea, and we do this symbolically in Matlab to get the bounds. (There's a toy sketch of this flow check below.)

We basically do a binary search over the value of p: try a certain value and see whether we get stochastic ordering; if not, go to the next higher one if we're looking for an upper bound, and so on, halving the search interval each time, until we're within 10^-8 of the best value of p for the upper bound, and similarly for the lower bound. We do that at a particular point on the critical surface of the martini lattice, then move to a different point and see whether we get better bounds there. I've developed a bit of an adaptive search, faster than binary search, for moving along the critical surface, and I continue until I get the best bound along the whole surface to six digits of accuracy, for each of the upper and lower bounds. That's why I report six decimals.

Just about the right time, I guess. So, what I'm still working on: hexagonal lattice bounds. As soon as I get my new computer, there's the four-tile substitution region I mentioned, with 10 boundary vertices. The one I just completed, which gave the improved bounds, had 3,955 classes. There's no problem working out the probabilities of the classes; it's the network flow part where we run out of memory. I know that for the 10-boundary-vertex region with four tiles there are 5,815 classes, which is about 60% bigger. Very roughly, if this is 1.6 times as big, it will take about 1.6 squared times as much memory. My current computer has 32 gigabytes of RAM and the new one will have 128, so it certainly should be doable. And there are a lot of other problems I should be able to do with 10 boundary vertices too. It's supposed to be a dedicated desktop that will sit here and run pretty much 24 hours a day for a year on different problems.
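Here is a toy version of that flow check, my own implementation after Preston's formulation as I understand it, on the little lattice P_3; the node names and toy measures are mine. It uses exact rational arithmetic, echoing the point made in the Q&A below that the flow value must equal exactly 1, not 1 up to round-off.

```python
from collections import deque
from fractions import Fraction as F

def max_flow(cap, s, t):
    """Edmonds-Karp on a capacity dict {(u, v): Fraction}."""
    res = dict(cap)
    for (u, v) in list(cap):
        res.setdefault((v, u), F(0))          # residual reverse edges
    adj = {}
    for (u, v) in res:
        adj.setdefault(u, set()).add(v)
    total = F(0)
    while True:
        prev = {s: None}
        queue = deque([s])
        while queue and t not in prev:        # BFS for an augmenting path
            u = queue.popleft()
            for v in adj.get(u, ()):
                if v not in prev and res[(u, v)] > 0:
                    prev[v] = u
                    queue.append(v)
        if t not in prev:
            return total
        path, v = [], t
        while v != s:
            path.append((prev[v], v))
            v = prev[v]
        push = min(res[e] for e in path)
        for (u, v) in path:
            res[(u, v)] -= push
            res[(v, u)] += push
        total += push

elements = ['1|2|3', '12|3', '13|2', '23|1', '123']
below = {x: {x, '1|2|3'} for x in elements}   # what each element sits above
below['123'] = set(elements)                   # the top is above everything

# Toy measures summing to 1; Q puts more mass on coarser partitions.
P = dict(zip(elements, [F(2, 5), F(1, 5), F(1, 5), F(1, 10), F(1, 10)]))
Q = dict(zip(elements, [F(1, 5), F(1, 5), F(1, 5), F(1, 10), F(3, 10)]))

cap = {}
for x in elements:
    cap[('s', ('Q', x))] = Q[x]               # source supplies Q
    cap[(('P', x), 't')] = P[x]               # sink absorbs P
    for y in below[x]:                        # mass may only move down
        cap[(('Q', x), ('P', y))] = F(1)

print(max_flow(cap, 's', 't') == F(1))        # True iff Q dominates P
```

In the real computation, this check would sit inside the bisection over p: raise or lower p until the flow value just reaches exactly 1.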
The square lattice is the real grand challenge, the one everybody would most like to know better. I have an idea for attacking it in stages: compare subgraphs of one type to get bounds, then use those bounds to go another stage, and so on. The current bounds there, both upper and lower, haven't been improved in over 20 years, so that's another grand challenge. And with the new computer, I'm going to take a fresh look at all of the bond problems that have been worked on before, and at the site problems. Very little has been done with site problems in the past, because there are a lot of cases with more than one edge going out of each boundary vertex. Where a bond problem's region might have eight boundary vertices, the corresponding site model, since we have to subdivide the outgoing edges, might have 16, and there are a lot more partitions of 16 boundary vertices than of 8, which is what blows the memory. So with more memory I should be able to attack more of the site problems as well.

With that, I think I've covered everything. There are some references to relatively recent papers about the substitution method and some of these results on the Kagomé lattice and the (3,12^2) lattice, in case you're interested. So, thank you very much for listening, those of you who are still awake.

Thanks, John. If we could all thank our speaker in some way, then we'll go ahead and open it up for some questions.

Thank you very much. I should stop sharing at this point, so I can see your faces. Although you might want to bring the slides back for questions. I can always bring them up again. I haven't seen any chat so far. A lot of it I just went over quickly; not the discrete math part, but I of course had to be kind of vague and try to explain the stochastic ordering and coupling more intuitively, although I suspect some of you have seen things like that.

Stephen: I have one quick question. If you run out of memory with your bigger computer, is there some workaround where you can swap out pages or something like that? I know it'll slow stuff down.

Yeah. As people keep telling me, I am the least computer-oriented person in the department, so I can talk to anybody else and they can suggest that I do such things, and then I say, okay, can you explain how I do it? Usually I get a new graduate student to look at it; that's been my technique. Right now, the group that's coming in this year, once they pass their introductory exam, as we call it (we don't have qualifying exams anymore; we have an introductory exam, covering the basic linear algebra, analysis, and probability that our applied math and statistics people need; the phrase I invented for it is that it's what the mythical well-prepared undergraduate would know, and it usually takes our incoming PhD students two or three tries to pass; we offer it every August and every January)... I'm thinking that the next graduate student I go after, once they pass the exam, since you aren't supposed to go after them before that, I'm going to look for somebody who both likes to do computing and wants to study something involving network flows.
They can pick up the probability and the other stuff I want them to know along the way. Thank you.

Some of it might be parallelizable. Yeah, although I don't think the network flow part is. That's what it always comes down to: some of the other things have been parallelizable, but network flow is this one thing where you have to keep trying to feed something through, then go back and feed it through somewhere else, and keep track of all of that.

How massive are the constraints for the network flow? I'm wondering if you could feed it to an LP solver or something. People have told me no. One of the problems I have with that: you could run it through a standard solver with decimals, with round-off error and so on, but then it wouldn't be a proof. You're trying to push through exactly one, and you have to know that it's exactly one. So this is all done symbolically. I didn't show some of the values, but it keeps track of the rational numbers; if I dug around on my computer I could probably find an answer that's a 137-digit number over a 138-digit number. You can, in principle, have it go back through and show you exactly what part of the probability flows through each edge, add up those fractions, and get exactly one. If you're dealing with round-off error, you don't know that, so you haven't actually proved that one measure stochastically dominates the other. There's a very good combinatorial optimization person in our department who tells me it's a problem that's difficult to break up.

Something popped into my head and then back out; it's been a long hard week. Oh, another suggestion people have made: isn't it a sparse matrix? Well, no. Because of all those edges from each class to everything equal to or below it, it's not sparse in the usual sense people mean. You create a flow structure matrix that keeps track of these things, and that's why the memory scales like that 1.6 squared. Somewhat less than half of the entries are zero; it's not like a tenth or a twentieth, so I can't save a lot of memory that way either. But I'm learning more about this. One of our colleagues: I went on a sabbatical to the Naval Surface Warfare Center and worked on things involving computer virus epidemics and other stuff, modeling things with percolation-type ideas. The person in my department who knew the person I went to work with down there, the one who originally introduced us, I kept trying to get him to teach me more computing, because he was a computer whiz, and he wouldn't do it. When I got back, his friend, the mathematician, told me: yeah, he did exactly what I told him. It's sort of like in the movie.
He didn't want me to learn computing. It's like that movie from long ago, Butch Cassidy and the Sundance Kid, where at one point Sundance says, "you just keep thinking, that's what you're good at." That's basically what he wanted me to do: don't do the computations, get somebody else to do that for you, just keep thinking.

Couldn't you replace your network with one that has a bunch of copies of the poset, with only the Hasse diagram edges between them, plus equality? That's a possibility; I don't know enough to know whether that really works. I guess, because if you push flow through to something much lower... if you push it through to something higher and then ask it to go to something lower, sorry, that would mean going back over, and you're really only allowed to go one way in the formulation that proves the network model is equivalent to checking stochastic ordering. So I don't think you can push some of it over and then further down in that copy; I think you'd have to push some to one and some to another if you're going to do that. But that's a good point; I'll have to stumble back through and look at where that formulation comes from. If I remember, it's Preston, a 1974 proof of stochastic ordering for probability measures on partially ordered sets, related to the FKG correlation inequality. I'm sure I've got it stored somewhere in my archive, and it would take me a while to get it all back in my mind, but I think a sparse structure like that could possibly be made to work in the proof, and that might open the door to using sparse matrix methods.

I have to go, but a quick question: what's going on with the non-overlapping intervals that are being reported for the bounds? Why... well, there are different simulation methods. Some try to determine whether a cluster is infinite by following its boundary: they can only look at a finite region in their simulation, and if the boundary doesn't close up to show that the cluster it encompasses is finite, they declare the cluster infinite; with a bigger region they might have found it was finite, so those are often biased in one direction. Others just look at a big region and ask whether you can reach the boundary, and others ask whether you can get from one side to the other. Those are all finite approximations, and what they get, when they look at different values of p, is an estimate of a curve going from zero to one. In a finite region, just like in my calculations, the probability of an event you describe like that is a polynomial increasing from zero to one. So they get an estimate of that polynomial, whereas the true probability of there being an infinite cluster, as a function of p, is zero until you reach the percolation threshold, and then it starts up and goes to one.
They'll get these polynomial functions for different sizes, and then they'll say: if you extrapolate, the point where it leaves zero in the limit is such and such. So there's error due to their simulations and error due to their extrapolation method, where some people assume it approaches at one power-law rate and some at another. It's like astrology, you know. They'll give these very confident estimates, seven decimals, plus or minus three in the last decimal, and somebody else will give one that disagrees in the seventh decimal by two, with plus or minus four. They can't both be right; only one of them can be. Occasionally some overlap, but they just aren't accounting for the uncertainty in their model. The error bars are based on how many simulations they've run, which covers the error in their function estimates, but the extrapolation is what's wrong with the model.

Yeah. Well, how many of you are students? There aren't that many discrete math faculty in the department, are there, or else it's a huge department. I'm a student; there are a few people who come from outside the department. He was a student until last year; he graduated and now he's at Georgia Tech, a similar sort of department, the School of Mathematics there. Okay. My first undergraduate research student is a full professor at Georgia Tech, but in computer science: Eric Vigoda, if you know him. I've been doing undergraduate research mentoring for about 30 years now, and it's been remarkable; I guess I've somehow succeeded at what the doctors' expression says, first do no harm. Eric was one of my first, and he turned out to be sort of a star, and another was Davar Khoshnevisan, who's a professor at Utah; he has a prize in stochastic processes, the Rollo Davidson Prize. So I must not have hurt them too much.

The Clemson joke? Go ahead, yeah. Okay, some of you may have missed the first one, but he can tell it. These are jokes I hear at my statistics conferences: people at Auburn and Alabama tell jokes about each other, and people at Clemson and South Carolina tell jokes about each other, the rivals, and I've heard the same two jokes about all four universities. I told them the one about the bridge before, but this one is about two Clemson graduates nailing shingles on a barn, a really tall barn. One guy stands at the bottom to steady the ladder, and the other one is up nailing the shingles, and there's nailing, and then a nail gets thrown down, and more nailing, and some more nails get thrown down, and so on. Being a Clemson graduate, and relatively slow, after half an hour the guy at the bottom says, hey, why are you throwing down all those nails? And the guy says, well, about half of them are pointing the wrong way. And after another half hour, the guy at the bottom yells, you idiot, you're supposed to save those to use on the other side!

Are there any other questions for John before we head out of here? Thank you very much.
Thanks, John, and have a good weekend, everybody.