OK, so let me first of all express my thanks to the organizers for inviting me and for giving me the opportunity to speak here. I was an organizer for a Park City program three years ago, so I have a good sense of how complex it is to actually run this. What I want to do is talk about some work that is very, very recent, parts of which are still somewhat tentative, because it was developed only very recently. The talk is all joint work with my student, Vivian Olsiewski Healey, who is moving to the University of Chicago in the fall. It includes some results from her thesis and also some more recent work. I would also like to acknowledge the help of Steffen Rohde from the University of Washington. This was one of those things where once you have the right idea, everything is very smooth, but actually getting to the right idea took a long time, and the couple of weeks that Vivian spent at Washington were very important. OK, so, great — this thing is supposed to move; let's try this. So the talk is about three pieces, two of which are classical probability. First, Galton–Watson trees, which describe the genealogy of birth-death processes. Here is a binary Galton–Watson tree: there is a mother, the mother has zero or two offspring, and it keeps going independently. The scaling limit of Galton–Watson trees, and more generally the scaling limit of trees of this nature, was understood in profound work by Aldous in the early 90s, and by now it should really be considered standard knowledge. So what I want to do is combine knowledge about Galton–Watson trees with the Loewner equation. The Loewner evolution is a, let's say, canonical method of developing conformal mappings.
And this talk is really about three questions: one of which is completely nailed down, one of which is not quite nailed down but where we have a very good sense of what should come out, and a third which is a little more tentative. The first question is: can we use the Loewner equation to construct natural graph embeddings of Galton–Watson trees in the upper half plane? So I really want to find a natural way of building conformal mappings which have the property of making trees. In particular, I want to understand the scaling limit of the object upstairs. Now, about the phrase "graph embedding": the CRT is a random metric space, and when we started out, what we really wanted were isometric embeddings. Our work doesn't quite achieve this. There is a notion of isometric embedding through something called conformally balanced trees, and at the end I'll introduce these ideas. In some sense, at first sight it will appear that this has nothing to do with random matrix theory — I'm just trying to build up these conformal mappings. But in fact, Satya Majumdar and I organized a meeting on random matrix theory in India several years ago, at a time when I didn't know anything about random matrix theory, and I was really turned on by the amazing lectures that Jean-Bernard Zuber gave at that meeting. All of this work actually originates in that meeting, about six or seven years ago. So there is random matrix theory here, and it really comes about through the connection between map enumeration and matrix integrals. OK, so in the talk there's a little bit of background I have to cover, because there's a very diverse audience, and I feel hesitant speaking about other people's work, but this is truly necessary.
And there are several beautiful papers by Le Gall which describe the CRT. So I want to introduce you to the CRT, and I want to introduce you to the Loewner evolution. Then I'll tell you, in some sense, our central result and our central conjecture, which is that we find a new stochastic PDE, and this stochastic PDE connects very nicely with random matrix theory. Finally, I'll conclude with some remarks on the motivation that started us out. OK, so what's a plane tree? It's a rooted combinatorial tree for which the edges are assigned a cyclic order about each vertex. The standard way of taking scaling limits of such trees is to look at their contour functions: you take the tree, and you build from it a contour function, which is an excursion, and you can go backwards — so trees of this nature are in bijective correspondence with such excursions. (This thing seems to keep sticking — OK, sorry, it's just a time delay.) So the trees here are discrete geometric objects, but to take their scaling limits, what we really do is extend them to, let's say, the continuum setting. A real tree is a pointed compact metric space with the tree property. That's a little hard to absorb at first sight, but there's a very intuitive way of thinking about it. Imagine you have an excursion as above, and use this excursion to introduce an equivalence relation on the interval [0, 1]. So given an excursion with f(0) = f(1) = 0 — an excursion means it's positive everywhere in between — you use a distance function: for example, the distance between two points s and t is d(s, t) = f(s) + f(t) − 2 min_[s,t] f, defined by the values of the function at the two points and the minimum in between. And the equivalence relation is that two points are identified if the distance between them is 0.
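The pseudo-metric just described is easy to compute for a sampled excursion. Here is a minimal sketch, my own illustration rather than anything from the talk (the function and variable names are mine):

```python
import numpy as np

def excursion_distance(f, s, t):
    """Pseudo-metric d_f(s, t) = f(s) + f(t) - 2 * min over [s, t] of f,
    for an excursion f sampled on a uniform grid; s, t are grid indices.
    Identifying points at distance 0 yields the real tree T_f."""
    lo, hi = min(s, t), max(s, t)
    return f[lo] + f[hi] - 2.0 * np.min(f[lo:hi + 1])

# Example: a tent-shaped excursion on [0, 1] with f(0) = f(1) = 0.
n = 100
grid = np.linspace(0.0, 1.0, n + 1)
f = np.minimum(grid, 1.0 - grid)

# The endpoints are glued: d_f(0, 1) = 0, so 0 and 1 map to the same
# point (the root) of the real tree, while d_f(0, 1/2) = f(1/2) = 0.5.
print(excursion_distance(f, 0, n))       # 0.0
print(excursion_distance(f, 0, n // 2))  # 0.5
```

For the tent function the tree coded by f is just a line segment of length 1/2; the Brownian excursion instead codes the CRT.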
So given an excursion of this nature, we define the quotient of [0, 1] by this relation, and we define the tree T_f to be the real tree coded by f. The continuum random tree introduced by Aldous is the random real tree coded by the normalized Brownian excursion. This is a natural scaling limit, because when you look at uniformly distributed, suitably rescaled contour paths of length 2n, these converge to the CRT. The CRT was introduced by Aldous in '93, and this is the framework in which you want to think about taking scaling limits of trees. Since what we want are branched structures in the complex plane, we really want to be in a framework where we can take continuum limits of such branched structures. OK, let me tell you a little bit about Loewner theory. I expect some of you have seen it, but the Loewner evolution is a constructive way of building up conformal mappings. The standard way it's presented: you're given, let's say, a simple curve in the upper half plane, and you want to build up a family of conformal mappings of this region — the upper half plane minus the curve — onto the upper half plane, by gradually wiping out the curve. The Riemann mapping theorem implies that for each t there's a unique conformal mapping which maps the upper half plane minus the portion of the curve up to time t onto the upper half plane. To get uniqueness, you normalize at infinity; that's the standard normalization in this business, called the hydrodynamic normalization, and the coefficient that appears there is called the half-plane capacity. The real point is that there is a constructive way of building these mappings: if you know the boundary value u(t), then solving an initial value problem gives you the conformal mapping.
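That initial value problem is the chordal Loewner ODE, and it is simple to solve numerically. The following is a minimal sketch of my own (forward Euler, with the zero driving function chosen because the map is then known in closed form):

```python
import numpy as np

def loewner_map(z, u, dt):
    """Forward-Euler solution of the chordal Loewner ODE
        d g_t(z) / dt = 2 / (g_t(z) - u(t)),   g_0(z) = z,
    for a driving function sampled in the array u with time step dt.
    Returns an approximation to g_T(z) for z in the upper half plane."""
    g = complex(z)
    for u_t in u:
        g = g + dt * 2.0 / (g - u_t)
    return g

# Zero driving function: the hull is a vertical slit, and the map has
# the closed form g_T(z) = sqrt(z**2 + 4*T) (upper-half-plane branch).
T, n = 1.0, 20000
z = 1.0 + 2.0j
approx = loewner_map(z, np.zeros(n), T / n)
exact = np.sqrt(z**2 + 4.0 * T)
print(abs(approx - exact))  # small discretization error
```

The same solver works for any continuous driving function u; the famous case u(t) = sqrt(kappa) * B_t gives SLE(kappa), which comes up later in the talk.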
So the first proof of the Riemann mapping theorem along these lines was developed — I think Koebe did it, around 1907 — and Loewner found, let's say, a continuous-time description of Koebe's construction. What Loewner was really interested in was resolving questions from univalent function theory; in particular, he's the one who made the Bieberbach conjecture famous. Now it turns out that there's a general version of the Loewner evolution: you need not restrict yourself to slit mappings, and you need not restrict the driving measure to a single point. In fact, there's a general theory which says that if you have a family of Herglotz functions parametrized by a positive measure, then you can define a conformal mapping with it. What's going to happen for us is that we are going to try to make this curve be not a curve but a tree. So all of my effort is going to go into describing this set, which is going to be a tree, and what I need to do is figure out what measure goes with it. Now, the conditions needed to get this theory to work are very, very mild: you need a family of non-negative Borel measures, continuity of these measures in time, and some very mild boundedness conditions. But there's a subtlety: you need much finer properties of the measure to establish finer geometric properties of the conformal mapping. OK, so the standard examples of Loewner theory: the first is just the point measure that I showed you, which drives a slit mapping, and there's a generalization of the slit mapping which is very, very recent — it's from a thesis just a few years ago. And the real issue here is the following.
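For reference, the measure-driven form of the chordal Loewner equation being described can plausibly be written as follows (this is my transcription of the standard setup; normalizations of the measure vary between authors):

```latex
\partial_t g_t(z) \;=\; \int_{\mathbb{R}} \frac{\mu_t(du)}{g_t(z) - u},
\qquad g_0(z) = z,
```

where $(\mu_t)_{t \ge 0}$ is the family of non-negative Borel measures on the real line. Taking the point mass $\mu_t = 2\,\delta_{U_t}$ recovers the slit-mapping case $\partial_t g_t = 2/(g_t - U_t)$ from before.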
So if you go back to this theory, you can generate a family of conformal mappings provided only that this measure is weakly continuous: you can find solutions to the Loewner PDE, and these solutions will actually define conformal mappings. All you require is that the measure is continuous. Now the catch is that what you really want is to be able to say something about the geometry of the hull, and here it turns out that you need not just continuity but Hölder continuity. The basic condition, found by Marshall and Rohde, is that the driving functions must be Hölder-1/2. And as many of you have probably seen, there are driving measures which are not Hölder-1/2 — these give the celebrated Schramm–Loewner evolutions, SLE(kappa) — and in these cases, depending on what kappa is, you get completely different geometric properties of the hull. That's really at a threshold: you need Hölder-1/2 to get a slit Loewner evolution; Brownian motion is not Hölder-1/2, but still things work, and what happens is that the constant in front becomes critical. So the question we are really interested in is the following: which measures will generate embeddings of trees, and how can we take continuum limits of these? OK, so the general measure-driven form of the Loewner evolution is rarely used, for two reasons. One is historical: Loewner was interested in the Bieberbach conjecture, and for that it was enough to look at slit mappings. The other is that in conformal mapping theory it's known that slit mappings are enough, in the sense that every conformal mapping can be obtained as a limit of slit mappings in the Carathéodory topology. That's actually bad news, in the sense that it says the continuity properties of the Loewner evolution are very weak if you look at geometric properties of the hulls.
You have to be far, far more careful than that. OK, so there's a one-line summary of our work, and it is the following: I claim that graph embeddings of continuum trees are generated by the Loewner evolution when the driving measure is a suitable superprocess. A superprocess is a measure-valued Markov process; these were introduced and studied extensively, in fact, in connection with continuum trees. And I'm going to give you an example of the simplest superprocess in our class, something we call the Dyson superprocess. The reason we chose the name is the following: there's something called the Dawson–Watanabe superprocess, also sometimes called super-Brownian motion, and so we played with calling ours the Dyson superprocess, or super free probability, or super free Brownian motion — but somehow I think this name works better. OK, so to tell you where this is coming from, I'm going to show you a short movie. This is a Loewner evolution driven by a Dyson Brownian motion with branching. It's a striking movie that Vivian made after she came back from her stay at Washington, and it's when I became convinced that what we were doing was actually very interesting. So what's going on in this picture? There's the pure combinatorics: a binary Galton–Watson tree, which I'm going to develop in continuous time. The mother has two children, each of these children lives for a random amount of time, and then each either branches or dies, again at random times. So what's going on in that picture is this abstract combinatorial object, with a time axis over here.
So this is the genealogy of a branching process, and what I'm really doing is using the genealogy to obtain a driving measure. This is going to be something with spatial structure: over here I have space, x. At time zero, when the mother has two children, I put down two points — at zero, say, to be concrete — and then these points evolve by Dyson Brownian motion. At the first branching time, this particle dies; over here, another one has two offspring. So there's a pure genealogy part, and then there's a spatial motion. The spatial measure over here is what I call μ_t(dx), and this measure drives the Loewner evolution. OK, so is this clear? This is the basic approximation: there's the Loewner evolution, which is this machine for generating conformal mappings; what has to go into the machine is some spatial motion, some measure; and I use the branching process to drive this measure. The key fact here — and this is a pure conformal-mapping-theory fact that took us a little while to figure out — is that Dyson Brownian motion is exactly what is needed to get the Loewner hull to branch. It's not obvious that the hull over there is actually going to branch. People had tried to do this, and it turns out you have to have exactly the right interaction: the 1/x repulsion of Dyson Brownian motion — Coulomb repulsion — is exactly the right thing so that the hull itself has a branching property. [Question: so here, at each branching time, what is the equation for these particles?] OK, let me put it this way: it's Dyson Brownian motion on these time intervals. I have the genealogy, so there's a time axis over here.
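The spatial motion between branch times can be sketched in a few lines. This is my own minimal illustration of Dyson Brownian motion with its 1/x repulsion (an Euler–Maruyama step; the normalization of the drift is one common convention and may differ from the talk's):

```python
import numpy as np

def dyson_step(x, dt, rng, beta=2.0):
    """One Euler--Maruyama step of Dyson Brownian motion:
        dx_i = (beta/2) * sum_{j != i} dt / (x_i - x_j) + dB_i,
    the 1/x Coulomb repulsion referred to in the talk."""
    diff = x[:, None] - x[None, :]
    np.fill_diagonal(diff, np.inf)  # exclude the i == j term
    drift = (beta / 2.0) * np.sum(1.0 / diff, axis=1)
    return x + drift * dt + np.sqrt(dt) * rng.standard_normal(len(x))

# Two particles started close together repel and separate, with the
# square-root-in-time behavior that the Loewner scaling is matched to.
rng = np.random.default_rng(1)
x = np.array([-0.01, 0.01])
for _ in range(1000):
    x = dyson_step(x, dt=1e-4, rng=rng)
print(np.sort(x))  # two separated points (with high probability)
```

In the branching picture, a step like this runs on each time slice; at a branch time a particle is removed or duplicated, and the empirical measure of the particles is the driving measure μ_t.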
So initially I'm starting out at a singular point: I have two points sitting on the boundary of the Weyl chamber, and then I move forward. Think about it as Dyson Brownian motion on time slices: I have a bunch of time intervals, given by the branching process — let me not put in any scaling factors — and the particle positions must be continuous across the branch times. [Question: you want to use local time?] Yeah, absolutely, absolutely. But what happens is that I have these branch times, and at each branch time I either remove or put in a particle, and it does matter how I do this. In fact, it'll turn out in the end — I should be a little more careful — that this is the prettiest picture, but in order to prove our theorems, at this stage we're still doing pure repulsion, and we're going to be in a scaling limit where we're doing a law of large numbers, so the noise term is actually going to vanish. OK, so the standard Dawson–Watanabe superprocess comes about in the following way. You again imagine that you have the same genealogy, and you have a bunch of particles, but these particles execute independent Brownian motions. You're interested in the situation where the time steps become shorter and shorter, so you have more and more branching — you're looking at the continuous-state branching process limit — and the number of particles you start with gets larger and larger. The main claim is that there is a nice scaling limit. Now this is part rigorous, part formal, and the scaling limit is, at least formally — let me come back to that issue in a couple of slides — described by the following stochastic PDE. I'm just going to write this out because it's going to come up. The term over here is space-time white noise.
OK, so let me again be explicit: this is just a formal SPDE. What I'm trying to do is take a scaling limit — you want to imagine zooming out from the kind of picture I was presenting in my numerics, but with infinitely many particles. And the way I want you to think about this equation is by analogy with super-Brownian motion, that is, with the Dawson–Watanabe superprocess. The Dawson–Watanabe superprocess is the scaling limit of branching Brownian motion when the discrete branching process converges to the Feller diffusion. In that problem the spatial motion of each particle is independent, and it's described, at least formally, by an SPDE with two pieces: one piece is pure linear evolution, given to you by the heat equation, and the second piece is the branching piece, which is where the noise comes in. In both of these problems, sigma is just a fixed positive parameter. Now, in the PDE that I'm writing down, you want to see the left-hand side — and I think this goes back to an early paper of Voiculescu — as the free-probability analog of the heat equation: you've got a linear operator which basically gives you free convolution with the semicircle law. And on the right-hand side you have the noise term. [Question: what is H? — Sorry, this is the Hilbert transform.] OK. So there's a very important difference with Dawson–Watanabe: Dawson–Watanabe really has particles doing their own independent spatial motion, but for us the particles are interacting, and it's a very singular interaction.
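The equation itself is not legible in the transcript, but from the description a plausible reconstruction is the following (my transcription; signs and constants are guesses based on the analogy the speaker draws). The Dawson–Watanabe superprocess is formally

```latex
\partial_t \rho \;=\; \tfrac{1}{2}\,\Delta \rho \;+\; \sqrt{\sigma\rho}\;\dot W,
```

while the Dyson superprocess replaces the heat operator by the transport term of the semicircular (free heat) flow:

```latex
\partial_t \rho \;+\; \partial_x\!\left(\rho\, H\rho\right) \;=\; \sqrt{\sigma\rho}\;\dot W,
\qquad
H\rho(x) \;=\; \mathrm{p.v.}\!\int \frac{\rho(y)}{x-y}\,dy,
```

where $\dot W$ is space-time white noise and $H$ is the Hilbert transform mentioned in the question. With $\sigma = 0$ the left-hand side is the mean-field (McKean–Vlasov) equation for Dyson Brownian motion, whose solution evolves by free convolution with the semicircle law.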
So it turns out that this is actually not covered by, let's say, the standard theorems of Perkins and people like that. OK, so let me give you another way of thinking about this. A very natural way of thinking about the evolution of measures on the line is to think instead about the Cauchy transform. Our basic object is always this randomly evolving measure, which is driving the Loewner evolution, and based on this measure I'm going to build a family of Gaussian analytic functions with a Bergman-type kernel for their covariance. OK, so the following is, in a sense, an equivalent form of this PDE: instead of writing out a PDE for the density, let's see what happens in the upper half plane. I just take the Cauchy transform of the PDE, and what happens is that now I truly get an analytic function in the upper half plane, and I just have a noise term, which is a Brownian motion in time. And this PDE I can actually think about solving by the method of characteristics. So the left-hand side here is just the equation that shows up in free probability theory — the equation that describes evolution by the semicircular flow — but what I'm doing is pushing it with a field which is again determined by the measure itself. And it turns out that this is intimately coupled with the Loewner evolution: a very clean way of writing out Loewner's PDE is finally just through this ODE. So there's this coupling between the two, and it fits very cleanly — that's what I'm trying to convey here. H is the Gaussian analytic function, which is obtained again from the measure. OK — somebody can ask a question while I'm figuring this out.
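The coupling being described can plausibly be written as follows (again my reconstruction, using the standard conventions; the noise term is left schematic). Define the Cauchy transform of the driving measure,

```latex
G_t(z) \;=\; \int_{\mathbb{R}} \frac{\mu_t(dx)}{z - x}, \qquad z \in \mathbb{H}.
```

Taking the Cauchy transform of the density equation gives, formally, a stochastically forced complex Burgers equation — the $\sigma = 0$ case, $\partial_t G + G\,\partial_z G = 0$, is exactly the free-probability equation for evolution by the semicircular flow — while the Loewner evolution driven by $\mu_t$ is the characteristic ODE of that same transport operator:

```latex
\partial_t g_t(z) \;=\; G_t\!\left(g_t(z)\right), \qquad g_0(z) = z.
```

This is the sense in which the measure evolution and the Loewner evolution are cleanly coupled: the conformal maps flow along the characteristics of the equation satisfied by the Cauchy transform.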
OK, so you see what I'm trying to do: I'm trying to find what should be the right limit equation for embedding continuum trees into the upper half plane. Now let me comment on this aspect of the result being formal. There is a key technical point that we can't prove right now — I was really hoping to have it by the time we came here, but we don't. The issue is whether there actually is a density: does this SPDE actually make sense or not? This turns out to be an interesting question already for the Dawson–Watanabe superprocess, and there it's known that whether or not you have a density depends on the space dimension. When the space dimension is one, you truly do have a density for the Dawson–Watanabe process — a result of Konno and Shiga from '88. On the other hand, when the dimension is greater than or equal to 2, it's known that the measure μ is singular. So we don't actually know what the answer should be here, and it comes down to how you think about this SPDE. If I operator-split in time — first the transport part, then the noise — then the transport part alone is actually smoothing, in the following sense. Suppose you start with some measure that could be singular — say two atoms. I always want to know whether I've got a density or not, so I look at the free convolution: set μ_t = μ ⊞ σ_t, the free convolution with the semicircle law of variance t. A very basic result of Biane says that if you just go up a little bit in time, then these atoms basically become like little semicircles. And Biane actually gives an L-infinity estimate, which tells you how the density of the measure behaves — it's something like the following.
So Biane's estimate goes — OK, let me write it out — it's something like ρ_t(x)² ≤ C/t; I think the constant is 1/π². So if you think about this equation with the noise set to zero, what you have is a kind of smoothing term; on the other hand, the fluctuations are roughening the measure while the free convolution is smoothing it, and I actually don't know how this balances. And it turns out this is absolutely critical, because for the way we are formulating this problem to have any meaning, what we actually need is that this measure should NOT have a density. The reason I want the measure not to have a density — let me go back to the very start. What I want to do is solve the Loewner evolution driven by this measure, and it turns out the measure is supported on the points from which the hull is growing into the upper half plane. What I want is for a tree to be built up, and in order for a tree to be built up, the evolution really needs to be driven by a singular measure which is spread out in the right way. And we don't have the estimates. So that's the analytical picture — or let's say the conceptual picture. That was supposed to be more or less the introduction, but anyway, let me tell you more precisely what we can prove, what goes into the proof, and how this sheds light on certain other problems. OK, so what I'm doing, to be clear, is trying to formulate what should be the candidate driving measure for embeddings of these continuum trees. Now these continuum trees are singular objects, and so it turns out that the measure I need has to be a singular object as well.
And this singular object — the measure — is described as a solution to an SPDE; that's the way the pieces fit together. OK, so let me tell you what we can prove rigorously, just at the outset. There are two pieces. One is a conformal mapping piece, which basically says that the square-root-of-t scaling of the Loewner evolution is perfectly matched with the square-root-of-t singularity in Dyson Brownian motion. That's just a deterministic fact about the Loewner evolution. The other is what we can actually prove about the SPDE: roughly speaking, our sequence of approximations is tight, and the limits of the sequence solve not quite this SPDE but a martingale problem that goes with it. So the picture is consistent, even if it's not fully nailed down. OK, so let me tell you about the conformal mapping piece. This is really one of the theorems proven by Vivian, and it has nothing to do with Dyson Brownian motion, nothing to do with probability. It's really the following question: how do you make a Loewner evolution which has the property of splitting particles into branches? The way we do it is by thinking about a Loewner evolution driven by a bunch of points, and I'm going to ask for a singular initial condition: I take n continuous driving functions which are mutually non-intersecting, except at one index, where two of them start at the same point. So I've got two points over there, and a double point over here, and I want to know how the Loewner evolution works with that. Let me not give you the detailed derivation here; this is really a pure conformal mapping piece.
What we are trying to understand is sort of what