Good afternoon. It's a pleasure to be here, and I really want to thank the organizers for inviting me as part of this conversation between the geometers and the probabilists present here. First I'll emphasize some probabilistic problems that have led to geometrically interesting configurations. The main topic I want to discuss is methods of fair allocation and the resulting geometries. This is part of a topic I have worked on for about 10 years with Alexander Holroyd, who is also at Microsoft, but through the years Robin Pemantle, Oded Schramm, and Chris Hoffman have participated in this, and as I'll mention later, there has been collaboration with others on this problem. So what do you see here? Yes, there should be a microphone; that's for the recording. Okay. So let me start with one scheme. All right, here we go. The basic topic, which we'll see from different approaches: we have a random array of points. The pictures will be finite, but it's good to imagine that this random array of points extends throughout the whole plane, or all of space. Formally, you can think of taking a very large cube, putting points at random there, one per unit volume, and passing to a limit as the cube goes to infinity. Of course in the pictures we'll only see finite cubes, and usually they'll be wrapped around. So what you see in this example is 500 points thrown at random. These are the centers; we'll call them centers, or stars, to distinguish them from other points. And the basic problem, which we'll see different approaches to, is how to allocate to each one, in a distributed way, a unit of area. We assume that the area is exactly 500 and the number of points is 500, so each point gets its fair share. And in the method that created this picture, you can think of each point as growing a ball around it at unit rate and capturing all the area that it reaches first, until it is sated.
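This ball-growing procedure can be mimicked in a few lines. What follows is a toy sketch, on a discrete torus of my own choosing rather than the continuum setting of the talk: sweeping all (site, center) pairs in increasing distance is the same as growing a ball around every center that captures unclaimed sites until the center has its quota.

```python
import random

# Toy sketch of the ball-growing allocation on an n-by-n torus (my own
# discretization, not the talk's continuum construction): sweep (site, center)
# pairs in increasing distance; a ball of growing radius around each center
# captures unclaimed sites until the center has its quota ("is sated").
def stable_allocation(n=60, k=9, seed=1):
    rng = random.Random(seed)
    centers = [(rng.uniform(0, n), rng.uniform(0, n)) for _ in range(k)]
    quota = n * n // k                       # each center's "unit of area"

    def dist2(sx, sy, cx, cy):               # squared distance on the torus
        dx = min(abs(sx - cx), n - abs(sx - cx))
        dy = min(abs(sy - cy), n - abs(sy - cy))
        return dx * dx + dy * dy

    pairs = sorted((dist2(x + .5, y + .5, cx, cy), (x, y), i)
                   for x in range(n) for y in range(n)
                   for i, (cx, cy) in enumerate(centers))
    owner, load = {}, [0] * k
    for _, site, i in pairs:                 # increasing distance = growing balls
        if site not in owner and load[i] < quota:
            owner[site] = i
            load[i] += 1
    return owner, load, quota
```

Since both sides rank each other by the same distances, this greedy sweep yields a stable allocation in the sense discussed below, and the territories it produces can indeed be disconnected, as in the picture.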
So some centers, like this one, are lucky: they get sated within a short distance around them. But other centers that are hemmed in have to go further. There are many examples of this; let me pick one that's easy to see. Even this one: see, it gets some area around it, but then it doesn't get its full share, so the disc keeps growing and eventually it completes its area quite far away. Whenever you see in this picture an arc of low curvature, it signifies that some center had to wait a very long time until it got its unit of area. But eventually everyone does get sated, because there is exactly one point per unit area. So a center's territory can be disconnected? Exactly, exactly, yes. Other methods will yield connected sets; different methods have different features. This method has the virtue of stability: as I'll explain, it's related to the Gale-Shapley stable marriage algorithm. This part of the talk is based on joint work over several years, in papers, as I mentioned, with Chris Hoffman, Alexander Holroyd, Robin Pemantle and Oded Schramm. So again, this is the setup: we have points, and we'll call some of them centers and the others sites, just to distinguish them. The centers are the ones that want to get their unit of volume. Formally, when we take these boxes, put uniform points in them, and take the limit as the box goes to infinity, this converges to the Poisson process, but for the purposes of this talk it's fine to just think of this limiting procedure as giving the points. Really, for most of this, the points could arise in many different ways. You could, for instance, take a lattice and give every lattice point an independent random translation; that will give a different kind of collection of random points. Or there is another kind of point process, very important in this type of problem, that has arisen from random analytic functions.
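The random analytic function in question is the Gaussian power series f(z) = Σ_n a_n z^n / √(n!) defined next. As a numerical sketch (the truncation degree N and the seed are my own choices), one can sample the coefficients and look at the zeros of the truncated polynomial:

```python
import math
import numpy as np

# Numerical sketch (truncation degree N and seed are my choices): sample the
# Gaussian power series f(z) = sum_n a_n z^n / sqrt(n!) with a_n i.i.d.
# standard complex normals, truncate at degree N, and compute its zeros.
rng = np.random.default_rng(0)
N = 40
a = (rng.standard_normal(N + 1) + 1j * rng.standard_normal(N + 1)) / np.sqrt(2)
coeffs = a / np.array([math.sqrt(math.factorial(n)) for n in range(N + 1)])
zeros = np.roots(coeffs[::-1])           # np.roots expects highest degree first
# The truncation is only trustworthy well inside radius sqrt(N); there the
# zero set has intensity 1/pi, i.e. about r**2 points in a disk of radius r.
inner = zeros[np.abs(zeros) < 4.0]
```

Plotting `inner` for a few seeds shows the striking visual regularity of this zero set compared to Poisson points.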
So you take a series like this, f(z) = Σ_n a_n z^n / √(n!), where the a_n are complex. What? Oh sure, sure. So you look at this power series where the a_n are complex normals; I have to mention this here for several reasons, as you'll see. Each a_n is independent, and its real and imaginary parts are independent Gaussian random variables. If you look at this series, it has the remarkable feature that its zeros have a translation invariant distribution. This series has been investigated quite a lot; key names are Sodin, Tsirelson, Nazarov and Volberg, and it's also highly related to earlier work of Shiffman and Zelditch. As I'll come to later, this series played a role here. So the zeros of this, as I said, are a translation invariant process. Okay, so as I mentioned, we want a fair allocation, meaning almost every site should be allocated to some center, and the territory allocated to each center should have volume one. I told you how we want to create the partition you saw in the picture by this ball-growing procedure. One problem when you try to formalize it, especially when working in infinite space, is that the volume allocated to each center depends on the volumes allocated to the other centers — if they reach some area first, then this center can't get it — and those in turn depend on others, and so on. We have examples of processes that seem to be defined this way but actually don't exist. So one has to be careful in defining a process with infinitely many sub-processes, all evolving in the continuum, each one depending on the others: a naive definition that you might think makes sense may actually not. To rigorously construct this process, it's good to observe what properties it must have if it existed, and then actually use these properties in the construction; the property it must have is stability. So what would be an instability?
So we're looking for an allocation ψ, and an instability would be a center and a site, here in this picture ξ and x, that prefer each other to some of their current partners. What does that mean? In this imagined allocation, ξ prefers x because x is closer to it than this site it was allocated, and x prefers to be allocated to ξ rather than to this far-away center. So this would be an instability, in the sense that both the centers and the sites have preferences, here just according to distance, and an instability is a center and a site that prefer each other to some of their partners — for the center, to some of its allocated area. It seems like a continuous analog of the stable marriage problem? Exactly, it's a continuous analog. Already in the original Gale-Shapley paper — I'll come back to it — they discussed college admissions as a many-to-one version of the problem. So the point is that the informal ball-growing procedure I described, if it is to make sense, cannot create this kind of instability: as ξ grows its ball, it will reach x before the farther center reaches x. The only reason ξ might not take x into its territory is either that it is already sated, but then it was sated before reaching distance |x − ξ| and would not want this point anyway; or that x is already taken at that point, but then x cannot end up captured by the farther center, because being already taken means it was taken by some closer center. So this kind of instability can't arise, and if the ball-growing procedure exists, it must create a stable allocation. We can now turn this on its head and ask: can we find some stable allocation? If it is unique, then it must correspond to the procedure I described, and indeed, for any invariant point process there is a unique stable allocation.
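For reference, the classical discrete two-sided version that this continuum allocation generalizes (and which the talk recalls next) can be sketched in a few lines. This is the textbook men-propose deferred-acceptance algorithm, in notation of my own choosing:

```python
# Minimal sketch of the men-propose deferred-acceptance (Gale-Shapley)
# algorithm: men_pref[m] and women_pref[w] list the other side in decreasing
# order of preference; returns fiance, with fiance[w] = man matched to woman w.
def gale_shapley(men_pref, women_pref):
    n = len(men_pref)
    rank = [{m: r for r, m in enumerate(prefs)} for prefs in women_pref]
    next_choice = [0] * n            # next woman on each man's list
    fiance = [None] * n              # woman w's current "maybe"
    free = list(range(n))            # men with no current "maybe"
    while free:
        m = free.pop()
        w = men_pref[m][next_choice[m]]
        next_choice[m] += 1
        if fiance[w] is None:
            fiance[w] = m            # she says "maybe", not "yes"
        elif rank[w][m] < rank[w][fiance[w]]:
            free.append(fiance[w])   # she trades up, rejects her old "maybe"
            fiance[w] = m
        else:
            free.append(m)           # rejected: m moves down his list
    return fiance
```

On the two-men, two-women example from the talk, where both matchings are stable, this men-proposing version returns the men-optimal matching; running it with the two sides swapped gives the other one.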
This differs from the classical Gale-Shapley setup, where you can have more than one stable matching, and I'll explain where the difference arises. What does a.s. mean in the theorem? Almost surely — with probability one. Thanks. Okay, so let me quickly remind you of the classical Gale-Shapley paper. This is, by the way, the most highly cited paper ever in the Monthly; it appeared in 1962. The basic setup has either men and women, or colleges and students; each side has some preference ordering over the other side, and the original problem — it was actually David Gale who thought of this setup — is to find a stable matching, stable meaning that there is no instability. Again, an instability is a pair, in this case a man and a woman, who prefer each other to their assigned partners in the matching. Already the simplest case shows that you can have non-uniqueness — if this color is visible: take just two men, 1 and 2, and two women, A and B. If these are the preferences for the men and these for the women, the arrow pointing to the more preferred, then both possible matchings are stable. So a stable matching is in general not unique, and originally it was a question whether one even exists, because, as pointed out in the Gale-Shapley paper, in the roommate problem — n people, n even, to be divided into pairs of roommates, each person with an ordering over the others — a stable partition need not exist; there is a simple example already with four people.
Similarly, they generalized it to the setup of colleges and students: every college has an ordering over the students, every student an ordering over the colleges, and this time every college has the capacity to admit many students; still, the notion of a stable allocation is the same as before — we don't want an instability — and a stable allocation exists, though in general it is not unique. While not widely used for colleges and students, this method is actually used to assign residents to hospitals. So what is the Gale-Shapley algorithm? In the original setup, say with men and women: each man proposes to the first woman on his list. Each woman rejects all but the most preferred man who has proposed to her — and to that top one she doesn't say yes, she just says maybe. All the rejected men go on to the next woman on their list and propose to her; then again each woman reassesses her current proposers and rejects all but the top one, and the process continues until every woman has a unique proposer. That has to be a stable matching, because you can see that an instability cannot arise from this algorithm. And again, a version of this algorithm really is used for assigning residents to hospitals. Note that there are two versions, depending on whether the men propose or the women propose, and in general these lead to different stable matchings; they lead to the same one if and only if the stable matching is unique. If a woman says maybe to a man, does he go on proposing? No, no — he waits, because she is his preferred choice among those who have not already rejected him. So the same thing can be used in our setting: each site — these are the points of the plane — applies to the closest center that has not rejected it. Say here is a site, or a bunch of sites; they've applied to this center and that center but have been rejected, so now they apply to the next one on their list.
Each center rejects any of the current applicants that are too far away: it only wants a unit of area, so if it has more than a unit of area of applicants, it rejects all but the closest unit of area, and the rejected applicants continue to other centers. You just iterate this. The one thing that needs to be proved is that almost all sites — points in the plane, or in space — are rejected only finitely many times, so that eventually they stop being rejected, and the center where they stop is the one they are allocated to. This is a picture of the stage-one applications, and what you see there is just the Voronoi tessellation: the applicants of every center are simply the closest sites. Then this is the stage-one shortlist: every center that got more than a unit of area of applicants rejected all but the closest unit of area. Then the stage-two shortlist, stage five, and in the limit this is the picture we get. Because it's a discrete procedure, it's much easier to analyze and to show that it converges. The proof that the result is stable is essentially the same as for the original Gale-Shapley algorithm; the fact that every center is sated and almost every site is allocated takes a little more work and needs the translation invariance of the process. I won't go through it slowly, but let me just mention that having density one is not enough: if you take a lattice, remove one point, and run this algorithm, this is the kind of picture you get — this is the lattice point that was removed, and there is a big hole around it, but there actually are infinitely many small holes arising elsewhere as well. And this doesn't happen at all if you have a translation-invariant process.
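The staged dynamics — apply, short-list, reject, repeat — can be mimicked on a discrete torus. Again this is my own toy discretization, with a quota of sites playing the role of the unit of area:

```python
import random

# Sketch of the site-proposal dynamics on an n-by-n torus (my discretization):
# every site applies to its nearest center that has not rejected it; every
# center short-lists its closest quota-worth of applicants and rejects the
# rest; iterate until no center rejects anyone.
def site_proposal_allocation(n=30, k=9, seed=2):
    rng = random.Random(seed)
    centers = [(rng.uniform(0, n), rng.uniform(0, n)) for _ in range(k)]
    quota = n * n // k

    def d2(s, c):
        dx = min(abs(s[0] - c[0]), n - abs(s[0] - c[0]))
        dy = min(abs(s[1] - c[1]), n - abs(s[1] - c[1]))
        return dx * dx + dy * dy

    sites = [(x + .5, y + .5) for x in range(n) for y in range(n)]
    rejected = {s: set() for s in sites}
    rounds = 0
    while True:
        rounds += 1
        proposals = {i: [] for i in range(k)}
        for s in sites:              # apply to the nearest non-rejecting center
            i = min((j for j in range(k) if j not in rejected[s]),
                    key=lambda j: d2(s, centers[j]))
            proposals[i].append(s)
        changed = False
        for i in range(k):           # short-list the closest quota applicants
            proposals[i].sort(key=lambda s: d2(s, centers[i]))
            for s in proposals[i][quota:]:
                rejected[s].add(i)
                changed = True
        if not changed:
            return {s: i for i in proposals for s in proposals[i]}, rounds
```

Each round corresponds to one of the stage-k shortlist pictures; because the total quota equals the number of sites, every site is eventually accepted and every center ends up exactly sated.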
So, some geometric facts that we do know about this process — and all these papers are available on the arXiv if you want to see more. Each center has a territory with finitely many bounded components. Maybe I'll skip this, and mention that in order to understand the long-range correlations of this model, it's good to place it inside a family of models with an extra parameter alpha, the appetite of every center. The centers have density one, one per unit area, but now imagine they have an appetite alpha which can vary. For alpha less than one the picture is subcritical, so there is some leftover area; for alpha bigger than one, not every center can be sated. So here's the picture for appetite 0.2, this is 0.8, and this is appetite one, which you've seen before. Now, initially we might think that the picture would get more and more wild as the appetite grows, but in fact the wildest picture happens at the critical case alpha equals one. As the appetite grows, these regions actually become more compact, in the sense that the fraction of area far away from its allocated center goes down, and you can see that the case of infinite appetite, where a center is never sated, just reverts to the Voronoi tessellation: nice polygons, with exponential tails on the allocation distance. At criticality, by contrast, the chance that a center has a site allocated further than r away decays only like a power law. What we know is that alpha equals one is the only parameter with power-law decay; at all other parameters it's exponential — actually exponential in radius to the d. Can you say again please what appetite means? Thanks — appetite means how much area makes a center sated. If you want to think of the process of growing balls, every center has a ball growing around it, capturing all the area it reaches first, until it is sated; it is sated when it reaches area alpha, and that's the appetite parameter.
The appetite also plays a similar role in the Gale-Shapley algorithm: every center, when it gets applications, rejects all but the closest area alpha of applicants. So with infinite appetite a center simply keeps everything that is closest to it, and you get the Voronoi tessellation. Okay, so the key thing is that when alpha is not critical, alpha not equal to one, you have very good control of the distances involved. Or, parameterizing differently, look at the origin, which is just a typical point, and ask how far away it is allocated: where is the center that the origin is allocated to? Call that distance X. If alpha is different from one, then X has better than exponential decay: the expectation of e to the X to the d is finite. At criticality there are power laws, though the exact power is known only in dimension one; in dimension two we know it's a power law, but which power is not known. So let me show you at least a proof that the domain allocated to every center is bounded. It's a simple lemma, just the tiniest bit of geometry, and it's interesting because it shows that the domains are bounded without giving any bound on how big they actually are. Say that a center is bad if it has an unbounded territory, that is, if points arbitrarily far out get allocated to it. If there were bad centers then, because everything is translation invariant, the ergodic theorem would tell us that there are bad centers everywhere — they would have positive density. In particular, each of these sectors, of angle less than 60 degrees, would have to contain bad centers as well, so put a bad center in each of them. Now, because a bad center gets area from arbitrarily far out, these surrounding centers block the middle center from getting any area: a site outside this disk cannot be allocated to the middle center, because there will always be another bad center that's closer.
And that closer bad center is still hungry — by the definition of a bad center it still wants area arbitrarily far out — so by stability the site can never be allocated to the middle center, contradicting the assumption that the middle center gets area arbitrarily far away. This is all at appetite one. Okay, so let me skip this. There are two other allocation schemes I want to tell you about. This one is an allocation with connected territories — unfortunately we don't know that they are bounded, although they seem to be — and it was constructed by Maxim Krikun. Let me quickly tell you the idea. You start with the infinite collection of points, these random Poisson points, and you build a minimal spanning tree on them. We know what a minimal spanning tree on a finite set of points is, right? A spanning tree whose total edge length is as small as possible; constructing it is one of the easiest algorithmic problems — the greedy algorithm works. But what does a minimal spanning tree mean here, on infinitely many points? One way to think of the construction: imagine that initially all edges are potentially present, and remove every edge that is the heaviest in some cycle. In the finite case, if you remove every edge that is the longest in a cycle, then clearly no cycles remain, and it's easy to see that you are still left with a connected graph. In the infinite case this is not immediate; it's a theorem of Ken Alexander that if you do the same process in two dimensions — put in every edge unless it is the longest in some cycle — you are still left with a connected tree. So you start with this minimal spanning tree.
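In the finite case, the greedy construction just mentioned can be sketched as Kruskal's algorithm: adding edges in increasing length while refusing to close a cycle is equivalent to deleting every edge that is heaviest in some cycle. A minimal self-contained version (the point count and seed are my choices):

```python
import random

# Finite-case sketch: Kruskal's greedy algorithm with union-find. Taking edges
# in increasing length and skipping any edge that would close a cycle is the
# same as deleting every edge that is the heaviest in some cycle.
def minimal_spanning_tree(pts):
    parent = list(range(len(pts)))

    def find(i):                      # union-find with path compression
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    d2 = lambda p, q: (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2
    edges = sorted((d2(pts[i], pts[j]), i, j)
                   for i in range(len(pts)) for j in range(i + 1, len(pts)))
    tree = []
    for _, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:                  # joining two components cannot close a cycle
            parent[ri] = rj
            tree.append((i, j))
    return tree

rng = random.Random(3)
pts = [(rng.random(), rng.random()) for _ in range(200)]
tree = minimal_spanning_tree(pts)     # a spanning tree on 200 points: 199 edges
```

Ken Alexander's theorem is exactly the statement that the cycle-deletion description survives the passage to infinitely many points in two dimensions.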
This is what you see here in black. Now, the complement of this tree is simply connected, so the Riemann mapping theorem applies, and you can map the complement of the tree to the upper half plane. This is a nice case for appreciating the power of the Riemann mapping theorem: usually when we teach it, we draw some nice domain where you can kind of see how you're going to bend it into the half plane. But draw this infinite tree on a large scale and think about how you would construct a conformal map sending its complement to the upper half plane — I mean, there are algorithms, but it's a little scary to think you won't run into problems. Of course the complement of the spanning tree is simply connected, because by the tree property there can be no non-trivial homotopy. So there is a mapping to the upper half plane, and the vertices get mapped to points on the line — except that every vertex is mapped to several points on the line, according to the number of adjacent domains. Say every center here wants a unit of area; then we subdivide. If this one wants a unit of area and has one, two, three adjacent domains, then under the Riemann mapping — which by Carathéodory's theorem extends to the boundary — this point is split into three points in the image, and each of those image points wants mass one third; that will be its appetite. Now you run the stable allocation algorithm that I showed you before, in the upper half plane, using the Euclidean metric there but with the area measure coming from this side through the Riemann mapping. So every center here starts to grow a disk around it, but it decides whether it is sated when it gets the right area in the pre-image.
So if this one wants area one third, that area is not measured here; it's measured back here, while the distances are measured in the Euclidean metric here. When you do that, it's easy to see that every center here has precedence on this line: it gets every point on the line that it prefers, over any other center, so its region contains as much of this line as it wants, and from that it's not hard to see that it gets a connected region. What is hard to see, and is still not rigorously proved, is that the regions allocated this way are bounded. Once you find them, you can map them back using the Riemann mapping, and this is a picture of what you get; the domains look very bounded, but there is no proof of that. So I want to switch now to the third, and perhaps most interesting, allocation scheme — let me get the right file for that. This is joint work with Sourav Chatterjee, Ron Peled and Dan Romik, which appeared in the Annals about three years ago; it's based on earlier work of Nazarov, Sodin and Volberg in the context of Gaussian analytic functions, which was crucial for our work. This is just repeating the setup: the kind of allocation we'll get from this process, which is gravitational allocation, does yield provably connected domains. The one problem is that for uniform random points this method only works in dimension 3 and higher; it doesn't work in dimension 2. In dimension 2 you can do this in finite regions, but not in the infinite plane. However, you can do it in the infinite plane for point processes with more rigidity than the Poisson process: it does work for the zeros of these Gaussian analytic functions, or for a perturbed lattice. So I'm going to skip the general background on this process that you've seen, and go to our work.
As I mentioned, Nazarov, Sodin and Volberg, with a slightly different motivation, were interested in allocating area to the zeros of this Gaussian analytic function that I wrote down, and they defined a gradient-flow allocation. Although the zero process seems more complicated than the Poisson process — and it is more complicated in many ways — for the purpose of this allocation it's actually simpler. Anyway, let me jump to the way we define the allocation. It's based on gravity. Again, Z are the centers, or stars, that are going to get volume allocated to them, and at every other point x in space we have a force of gravity, which is just the sum of the forces from all these stars. If we're in d dimensions, we use Newtonian gravity, so the magnitude of the force is the distance to the power 1 minus d, and you can realize that by taking the vector z minus x and dividing it by the distance to the power d: this is a force vector pointing from x to z whose magnitude is the distance to the power 1 minus d. Now we sum these forces over all the stars z, and that gives the force acting on the point x. This kind of force was first considered by Chandrasekhar in the 40s, and he already observed that the series never converges absolutely, and even conditionally it converges only in dimensions 3 and higher: if you sum the series in dimension 2, it is always divergent when the centers are random. But even in dimensions 3 and higher, the order of summation matters. Here, in order to get an invariant field, we sum in increasing distance from the target point x. That's convenient because it automatically defines a translation-invariant force; it's not convenient for other purposes, because if you want to differentiate with respect to x, you don't want the variable you're differentiating by to also enter the ordering of the summation. So you want to move to summation in a fixed order, and you can, but it turns out this comes with a surprising change; let me come to that.
Okay, so in order to understand the effect of changing the order of summation, let's define a two-variable function: G_U(x) is the force acting on x, but where the ordering of the summation is by distance from another point U rather than from x. I should say that initially we expected that as long as you add in increasing distance, it shouldn't matter where the center of summation is: you're adding points in some order of shells, and if x and U are some fixed distance apart, for far-away points it should look the same. But it turns out it does make a difference. Here is the exact statement. Compare G_U(x) to G_V(x): you keep the point x, so all the summands are exactly the same; the only difference is the order of summation. We know that by changing the order of summation we can make a non-absolutely convergent series go anywhere, but here the change looks rather slight — we are just changing the point that determines the ordering. It turns out this change does alter the sum. The value of the sum is a random variable, but when you move the center of summation from U to V, you get a deterministic change, which is just a constant: the volume of the unit ball times the vector going from U to V. The proof of this just comes from Newton's laws of gravity, and I think I'll skip that part. We need this in order to write the sum in terms of a fixed point, so that we can apply the theory of differential equations: we look at the flow curve determined by the equation gamma dot of t equals F of gamma of t. In other words, let me show a picture of how these things look; it's easier to see when it's mapped to a sphere. So here — okay, this is the same thing happening on a sphere, where it's easier to see.
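This deterministic shift can be checked numerically in d = 3. Below is a sketch of my own (box size, cutoff radius, seed and tolerance are all my choices), with the sign convention that the partial sums ordered around U minus those ordered around V come out close to κ_d (U − V):

```python
import numpy as np

# Numerical check in d = 3: the partial sums of the force at x, ordered by
# distance from U versus from V, differ by the deterministic vector
# kappa_d * (U - V), up to O(1/R) fluctuations. L, R, seed are my choices.
kappa3 = 4 * np.pi / 3                     # volume of the unit ball in R^3
rng = np.random.default_rng(7)
L, R = 33.0, 30.0
U, V = np.zeros(3), np.array([1.0, 0.0, 0.0])
x = np.zeros(3)                            # the point the force acts on
n = rng.poisson((2 * L) ** 3)              # Poisson points of intensity 1
Z = rng.uniform(-L, L, size=(n, 3))
K = (Z - x) / np.linalg.norm(Z - x, axis=1, keepdims=True) ** 3
in_U = np.linalg.norm(Z - U, axis=1) < R   # summed first when centred at U
in_V = np.linalg.norm(Z - V, axis=1) < R
# only the thin lens of points near radius R distinguishes the two orderings
diff = K[in_U & ~in_V].sum(axis=0) - K[in_V & ~in_U].sum(axis=0)
```

Although each ordered sum is itself random, `diff` concentrates around the deterministic vector kappa3 * (U - V) as R grows, which is exactly the constant in the statement.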
Again, for every point in space you look at this force field determined by all the stars, and now the point is going to move according to this gravity. But this is Aristotelian movement: gravity is used as a velocity rather than an acceleration. When gravity acts as a velocity field, every point will fall into a star; it won't start circling stars, because there is no inertia. For every point, the balance of the forces leads it into some star — not necessarily the nearest star — except for a zero-measure set of boundaries where the forces balance. So the formula we wrote, which we might have thought was a force, is used as a velocity. And here is the beauty of it, why this method is preferable to the others: here, as I'll show you, we get a unit of volume per point without forcing it in any way. If you remember, in the stable allocation picture a center got sated when it reached unit volume. Here, nothing of the sort: the centers were laid down so that there is on average one center per unit volume, but only on average — some areas are denser, some are sparser — and yet this method yields exactly unit volume for each center. The proof is just an application of Green's formula, which I'll show you next. But that's a big surprise! It is a big surprise. In fact, this connection was first observed in the Gaussian setup, for the zeros of Gaussian analytic functions, and exploited by Nazarov, Sodin and Volberg in their analysis of that Gaussian function; we transported their idea to this setting. So our theorem is for the Poisson process, for random points in dimension 3 and higher, while Nazarov, Sodin and Volberg proved this for the zeros of this function in the plane, although they didn't represent it this way.
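Here is a toy illustration of the Aristotelian dynamics for a finite configuration in the plane (finite, since in dimension 2 the infinite sum diverges; the star positions, step size and capture radius are my own choices):

```python
import math

# Toy sketch of the Aristotelian dynamics for finitely many stars in 2D:
# the gravitational field is used as a velocity, not an acceleration, so a
# tracer point slides into a star instead of orbiting it.
STARS = [(0.3, 0.3), (0.7, 0.6), (0.2, 0.8)]

def force(p, stars=STARS, d=2):
    fx = fy = 0.0
    for sx, sy in stars:
        dx, dy = sx - p[0], sy - p[1]
        r = math.hypot(dx, dy)
        fx += dx / r ** d            # magnitude r**(1 - d), pointing at the star
        fy += dy / r ** d
    return fx, fy

def flow_to_star(p, stars=STARS, dt=1e-3, steps=200_000):
    for _ in range(steps):
        fx, fy = force(p, stars)
        p = (p[0] + dt * fx, p[1] + dt * fy)   # gamma' = F(gamma), Euler step
        for i, (sx, sy) in enumerate(stars):
            if math.hypot(p[0] - sx, p[1] - sy) < 0.05:
                return i             # captured: p lies in the basin of star i
    return None
```

A tracer started near a star falls into it, and a tracer started anywhere generic still falls into some star: with no inertia there are no orbits, only basins of attraction.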
They represented the force not in terms of this sum, but rather as the gradient of the log of the function: this is the function f, they looked at log |f| and used its gradient to determine the force. In that case there is a convenient way to define the force; here we have to construct the force by hand, but it turns out the idea is similar. So, as I told you, this is the force field that we're going to use as a velocity field, and because we need to differentiate it, we rewrite it with the ordering around a fixed point, which we just choose to be the origin; then there is a correction term coming from changing the summation, which is just this constant, the volume of the unit ball, times x. Now, why is the area in each basin equal to one? Take a basin of attraction B(z) — all the points that fall into a certain star z — and look at a point x on the boundary of the basin. If n is the outward-pointing normal vector at x, then the normal is actually orthogonal to the force at x, because of the way these boundaries are defined: the forces there exactly balance, so the force is along the boundary. If the force pointed into the basin, this would not be a boundary point; it would still be attracted to the star. So we get F dot n equal to zero, and when we integrate F dot n over the boundary of the basin, we get identically zero. Now, what is the distributional divergence of the force? Look again at the formula for the force; we want to apply the Gauss-Green theorem, so we want to understand the divergence. From the correction term we just get a constant — so this correction term is actually going to be important. Then there are the singularities: the force is the gradient of the Newtonian potential, so its divergence is zero except at the singularities.
At each singularity, the Laplacian of the Green function gives us a Dirac mass. So in the end, the divergence is d times the volume of the unit ball, minus d times the volume of the unit ball multiplied by Dirac masses at the stars. By the definition of a basin, there is a unique star in it, so there is a unique singularity inside. To conclude, we write the divergence theorem this way: this boundary integral, which we know is zero, equals the integral over the basin of the distributional divergence of F. What does that give us? Integrating the constant gives d times the volume of the unit ball times the volume of the basin, and then we subtract the same constant times the number of stars inside — and there is a unique star inside. So this must equal zero, and the volume of the basin must be one. So again: the boundary integral was zero; it equals the integral of the divergence over the interior; the divergence has two terms, the constant and a Dirac mass contribution from each singularity; integrating over a single basin, only one singularity lies inside, the star that determines the basin; so the constant times the volume, minus the constant times one, is zero, and the volume is one. Now, the key thing that makes this work is the fact that the force actually does partition space into basins that are bounded, and that's a non-trivial point: if the set of stars were not a translation-invariant process, you could well have unbounded basins, and then this application of the divergence theorem would be illegal. There is some technicality in proving that all the conditions of the theorem hold, but this is the key calculation.
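Written out (a sketch in the talk's notation, with κ_d the volume of the unit ball, so dκ_d is the surface area of the unit sphere), the key calculation is:

```latex
% Force summed around a fixed origin, with the correction term:
F(x) \;=\; \sum_{z \in \mathcal{Z}} \frac{z - x}{|z - x|^{d}} \;+\; \kappa_d\, x,
\qquad
\operatorname{div} F \;=\; d\,\kappa_d \;-\; d\,\kappa_d \sum_{z \in \mathcal{Z}} \delta_z ,
% since each singularity contributes -d\kappa_d\,\delta_z
% and \operatorname{div}(\kappa_d x) = d\,\kappa_d.
% On a basin B(z) the force is tangent to the boundary, so
0 \;=\; \oint_{\partial B(z)} F \cdot n
  \;=\; \int_{B(z)} \operatorname{div} F
  \;=\; d\,\kappa_d \bigl( \operatorname{vol} B(z) - 1 \bigr)
\;\Longrightarrow\; \operatorname{vol} B(z) = 1 .
```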
conditions of the theorem hold, but this is the key calculation.

For these basins we can show, in dimension three and higher, very good behavior, much better than the stable allocation, which at the critical density had power-law tails. There we don't know exactly what the tails are, but we know they can't be too good: a point could be allocated to a huge distance r away, and the chance of that is only a constant over a power of r. For the gravitational allocation, whenever it exists, that is in dimension three and higher, we get exponential decay: the chance of being allocated very far away decays very fast. The proof involves percolation arguments, which I am going to skip; I'll just say that we did a more precise analysis, in a later paper in GAFA, of the long tails that you can sometimes see in these pictures and what their mass is, but I won't go into that.

Let me show you, though, the random planar potential that arose from this Gaussian entire function. The first application of this type of idea was to the zeros of this function: if f is the function, this is a picture of log|f|, which is minus infinity at the zeros of f, and here the allocation can be defined this way. Look at this surface; this is the potential, and the force field we are going to apply is its gradient. You can think of leaving a pebble at every point and asking where it rolls: it rolls into one of these holes, which correspond to the zeros of f, and the basin of attraction of a zero is the set of all initial locations from which pebbles roll into it. What Nazarov, Sodin and Volberg proved in this case is that the basin of attraction of every one of these zeros has exactly
the same volume. There are many other nice facts about this allocation; here is one of them, which follows from Liouville's theorem. If you look at Γ_A(t), the set of points that fall into the basin A but take time at least t to do so, then you can understand how that set scales in time, but maybe I won't go into that. Let me stop here; thanks for your attention.

Could you do this same construction but with different masses, so the areas of attraction differ? Right, so you still need some translation invariance to be sure that the regions remain bounded, but if you do this and assign every star an independent random mass, say uniform in some interval, then you get an allocation where the area allocated to every star is proportional to its mass.

The pictures were two-dimensional, but I suppose this theory works the same in higher dimensions? The theory for random points works only in dimension three and higher. The problem in dimension two is that when the points are completely random there is too much volatility: the force acting on a point does not converge. But if you have another array of points that is not as wild as completely uniform random points, then this makes sense and it works in two dimensions. I started with this Gaussian analytic function, but a simpler case is a perturbed lattice: start with a lattice and give every lattice point an independent random perturbation; then this works in two dimensions. Our main interest, though, was in the Poisson process, the limit of random points, and there all of this is in dimension three and higher, even though the pictures at the very beginning were two-dimensional.

And the stable allocation you talked about at the start? The stable allocation has the advantage that it works in all dimensions, including two; there is no difference there between two and higher dimensions. Is there
some possibility to say something about the topology of these sets? In dimension two they can be disconnected; can that happen in any dimension? It is easy to see that in the stable allocation, if a center is hemmed in by other centers, it gets a disconnected territory. This is different in the gravitational allocation: there the territory is always connected, so even a hemmed-in center creates a connected set, which includes long thin tails that go in between the other centers.

When you first think of the gravitational allocation this is surprising. Suppose this is a center and it is surrounded by a bunch of other centers all around, and the area near it is much smaller than one; how is it going to get area one? Well, it gets some area that comes through here, because at these boundary points there is an exact balance of the forces, and miraculously it gets exactly area one, because Green's theorem forces it to. But when you think of how it actually happens in specific configurations it is very surprising.

There are other surprising cases. Say you have a lone star: in the Poisson process in R^3 you have a star, and out to distance a million there is nothing, all the other stars are further away. This will happen only about once in e to the million cubed, but it will happen. So here is a center with everything very, very far away; you apply this gravitational force, and how come it is not getting a huge region? In this case it will get almost exactly a ball around it, and you can do a calculation by hand to see that points outside this ball are attracted more strongly to the faraway stars. And this works exactly only with Newtonian gravitation: if you change the power in any way the whole thing breaks down, as you see both from the Green theorem and from more direct calculations. But even after
seeing the Green theorem argument, to really be convinced we had to see both the simulations and do calculations in specific cases, because at first it really seemed paradoxical.

Is anything known about the interfaces between the basins? Has anyone worked on that? In the planar case, the entire-function case, I think there would be a number: every point is incident to an average of 8 basins, and every basin to 8 thirds. I can prove this in the bounded case, on the sphere, so it should be true in the plane. There is a paper with Steve Zelditch; it is a consequence of results of Zelditch on the sphere, on average. The same thing should be true in the Gaussian entire function case: the average number of local maxima is one third the number of zeros, and the average number of critical saddle points is four thirds the number of zeros. One has to worry about how to define this in the unbounded case, because some regions may be very long.

But the probability of being too long is exponentially decaying? Yes, it is exponential decay; exponential decay was also proved in this Gaussian analytic case.

So, one short final point about the power of pictures in this business. Is that picture for the polynomial zeros?
This is for polynomial zeros on a sphere. The idea for this kind of allocation, as far as I know, was first suggested by Sodin and Tsirelson, and I heard it from Mikhail Sodin at a conference at Stanford some years back. He told me the definition, but he said: we noticed it, but we are not planning to work on it. I was really intrigued, but couldn't prove anything about it. Then I asked my student at the time, Manjunath Krishnapur, to simulate it, and it took quite a while to find the right way to view it; but once we got the pictures we sent them back to Sodin, and he said: wow, I am going to drop everything and try to understand this. That then led to the paper of Nazarov, Sodin and Volberg, which was crucial in the later work. I was already intrigued, but I could not have made the progress without Nazarov, Sodin and Volberg's insights, which really were motivated by understanding these kinds of pictures. Thank you.
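The "pebble rolling" picture of basins of attraction described in the talk can be illustrated numerically. Below is a minimal sketch, entirely our own illustration rather than code from the speaker: a handful of planar "stars" exert the two-dimensional analogue of the attractive force discussed above, F(x) = Σ_z (z − x)/|z − x|^2, and a pebble released at a point follows the force by explicit Euler steps until some star captures it; the index of the capturing star identifies which basin the starting point belongs to. The step size, capture radius, and regularization constant are arbitrary choices for this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
stars = rng.uniform(0.0, 1.0, size=(5, 2))  # a few "stars" in the unit square

def force(x, stars, eps=1e-9):
    # 2D analogue of the attractive force: each star z pulls x by (z - x)/|z - x|^2
    diff = stars - x
    r2 = np.sum(diff ** 2, axis=1) + eps  # eps regularizes the singularity
    return np.sum(diff / r2[:, None], axis=0)

def roll_pebble(x0, stars, step=1e-3, n_steps=20000, capture=1e-2):
    # Explicit Euler "gradient flow": follow the force until a star captures the pebble.
    x = np.array(x0, dtype=float)
    for _ in range(n_steps):
        dist = np.linalg.norm(stars - x, axis=1)
        if dist.min() < capture:
            return int(dist.argmin())  # index of the capturing star = basin label
        f = force(x, stars)
        x = x + step * f / (1.0 + np.linalg.norm(f))  # normalized step for stability
    # fallback: report the nearest star if the flow did not terminate
    return int(np.linalg.norm(stars - x, axis=1).argmin())

# a pebble released right next to star 2 rolls into star 2
assert roll_pebble(stars[2] + 0.005, stars) == 2
```

This is only a finite toy: the results in the talk concern infinite translation-invariant point processes, where the convergence of the force sum and the boundedness of the basins are exactly the delicate points.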