So it's quite an honor to be at this prestigious conference for Franck Merle. While we're not really in the same community, I think we have known each other for a very long time, since we're about the same generation; the last time we met was with Yvon, at a memorable entrecôte lunch somewhere in the southwest of Paris. So congratulations. What I'm presenting is probably not at the core of the expertise of the people here. It's about partial differential equations with probability, stochastic partial differential equations, in the singular case when these equations need to be renormalized. I guess some people here, because of Bourguin, are somewhat familiar with certain aspects of this theory; because of my limited background I will stay entirely on the elliptic and parabolic side and never touch the dispersive side. What I'm reporting on is joint work with three collaborators, at some point PhD students or postdocs in Leipzig, based on earlier work with Hendrik Weber, Jonas Sauer and Scott Smith. Let me get right into the subject. For the purpose of this talk I would like to focus on one very well studied example, what in quantum field theory is called the Φ⁴ model. In PDE language you might say it's an Allen-Cahn equation: on the left-hand side, the linear differential operator is just the heat operator, which, for reasons that just have to do with convenient notation, I write with the zero direction as the time-like variable and directions one to d as the space-like variables. On the right-hand side you have a cubic nonlinearity; it's convenient to put a parameter λ in front of it, which of course may have either sign (in the usual case the good sign is the positive one). But the point is that this equation is driven by noise: the right-hand side is random and very rough.
One easy example, the standard example, although I will deviate a little from it in this talk, is white noise. Some of you may be familiar with the following type of counting. If you try to map white noise onto the Hölder scale, its regularity depends on the dimension: the higher the dimension, the more irregular it is. There is something like an effective dimension: in the parabolic setting you have to count the time dimension twice. The regularity is minus the effective dimension over two, so it's always fractional; in effective dimension four, white noise has regularity minus two, or rather slightly below minus two. That means the solution of the linear problem is already no longer a function, and if you take the solution of the linear problem and cube it, that is an operation which doesn't make sense, because you are taking the cube of a distribution. So this problem has serious, well-known problems. In fact it's not just a technical issue: you cannot really make sense of the equations as they stand, and they need to be changed, renormalized. That's what's done by so-called counterterms. You change your PDE, with the goal of keeping your solution manifold controlled, by adding potentially new terms to your equation, hopefully as few as possible; you don't want to change your equation too much. A typical ansatz would be to put in all terms which are formally of lower order: since you have a cubic nonlinearity, you would put a quadratic, a linear and a constant term, and since this is a second-order operator, you would also put a first-order derivative term.
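This regularity counting can be written as a tiny helper (my own illustration, not from the talk, using the conventions just stated: time counted twice, and a Schauder gain of two orders for the heat operator):

```python
# Parabolic regularity bookkeeping for white noise, up to "minus epsilon":
# the true Holder regularity sits just below the values returned here.

def effective_dimension(d_space: int) -> int:
    """Parabolic effective dimension: the time dimension counts twice."""
    return d_space + 2

def white_noise_regularity(d_space: int) -> float:
    """Holder exponent of white noise on the parabolic scale:
    minus the effective dimension over two."""
    return -effective_dimension(d_space) / 2

def linear_solution_regularity(d_space: int) -> float:
    """Solution of the heat equation driven by white noise:
    Schauder theory gains two orders."""
    return white_noise_regularity(d_space) + 2

# In effective dimension four (two space dimensions), white noise sits
# just below -2, so the linear solution sits just below 0: it is no
# longer a function, and its cube is ill-defined.
```

This is only bookkeeping, of course, but it reproduces the counting in the talk: in effective dimension four the linear solution has regularity just below zero.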
Then you have all these coefficients, to which I gave somewhat strange notations here (h'', h', h_i with the summation convention, and h), because in the end I will just keep one of them, h. The idea is of course that you have to choose these coefficients once and for all, depending on the ensemble, on the probability measure you endow your right-hand side with, which describes your noise; they may also depend on the parameter λ, but that's it. They should be the same for any solution, for any type of boundary or initial value problem you might want to pose. Then there is an easy reduction if you make very benign assumptions on your noise, which would for instance be satisfied by white noise. Assuming stationarity, meaning that the law of the random field is invariant under translation, you would get, or it is natural to assume, that all these coefficients are deterministic spacetime constants. If the law of your noise is even, in the sense that −ξ has the same distribution as ξ, two of the terms drop out. And if in addition you have a reflection symmetry in each of the coordinates, which you would of course have if you assumed isotropy in space, then the first-order derivative term drops out too, so that in the end (perhaps I will after all use the blackboard a little) you end up with this ansatz for the renormalized equation. The goal of renormalization then is to choose this counterterm h in such a way that the solution manifold stays under control. But first of all you have to try to make sense of the equation. The easiest way would be to regularize your noise, for instance by convolution, so that you're in the classical setting; and then of course this counterterm should also depend on the regularization parameter ε.
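In symbols, the reduction just described can be summarized as follows (my reconstruction of the blackboard ansatz; the coefficient names follow the talk):

```latex
% Full ansatz: all formally lower-order counterterms
(\partial_0 - \textstyle\sum_{i=1}^d \partial_i^2)\,\phi
  \;=\; \lambda\,\phi^3
  \;+\; h''\,\phi^2 \;+\; h'\,\phi \;+\; h_i\,\partial_i\phi \;+\; h
  \;+\; \xi .
% Stationarity: all coefficients are deterministic spacetime constants.
% Evenness of the noise ($\xi \overset{\mathrm{law}}{=} -\xi$):
%   the even terms $h''\phi^2$ and $h$ drop out.
% Reflection symmetry in each spatial coordinate:
%   the term $h_i\,\partial_i\phi$ drops out.
% What survives, renaming the remaining coefficient $h$:
(\partial_0 - \textstyle\sum_{i=1}^d \partial_i^2)\,\phi
  \;=\; \lambda\,\phi^3 \;+\; h\,\phi \;+\; \xi .
```

The surviving linear counterterm hφ is consistent with the later slides, where the placeholder c multiplies π.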
You want to choose this counterterm, which typically has to be divergent, in such a way that you preserve as many symmetries of your original problem as possible, while at the same time the solution manifold stays controlled; it doesn't run away. In order to make sense of this quest of keeping the solution manifold controlled, it's natural to think in terms of a parametrization of the solution manifold, of introducing coordinates on it, and that's what I want to do next; it's a standard approach in this field. So here again is the problem: we have the heat operator applied to φ, the cubic nonlinearity, the noise, and the counterterm, whose coefficient h is a deterministic constant depending on the parameter in front of the nonlinearity and on the noise strength. Of course, if λ happens to be zero, so if you're in the linear case, then h should also be zero. Now let's think of a parametrization of the manifold of solutions. To get an idea, the easiest case is the linear one: you switch the nonlinearity off by setting λ equal to zero. Then you have a linear equation whose only right-hand side term is the noise, and that means the solution manifold is an affine space. The linear space it is modeled on is just the space of all functions annihilated by the heat operator, and those are analytic functions, very nice functions. So in the linear case the solution manifold is an affine space over a very nice linear space. Here we heavily use, at least in this heuristic, the fact that the operator is a nice elliptic or parabolic operator: we know that this linear space consists of analytic functions.
Analytic functions are, in a certain sense, naturally endowed with coordinates: you write down a power series, you take the jet of derivatives at some somewhat arbitrarily chosen point, which here I take to be the origin, which of course a priori doesn't have any meaning. So we have already found coordinates for the solution manifold once we fix one distinguished point on this affine manifold; I'll mention that in a second in the linear case. From standard nonlinear theory in the smooth case, it's conceivable that this parametrization will survive as you crank up the nonlinearity; of course it becomes a nonlinear parametrization. That's the way you would like to think about it, and that's the ansatz you make. It's then convenient to continue to use these coordinates, and what you end up with is at least a formal series representation of a general solution φ, which you want to think of as a function of the parameter λ in front of the nonlinearity and, like an analytic function or a polynomial, write as a series in coordinate monomials: monomials in λ and in all spacetime derivatives, of any order, of p, with coefficients which are random spacetime functions. Of course this is extremely bold, and in general the sum will not converge, but this idea has for instance been used successfully in the numerical analysis of ODEs, where one also writes a solution as such a formal power series.
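To make the bookkeeping concrete, here is a toy implementation (my own sketch, not code from the talk) of formal power series in infinitely many variables, multiplied with the same rule as for polynomials:

```python
# A formal power series is a dict {multi-index: coefficient}.  A
# multi-index records, for each formal variable, its exponent; we
# encode it as a frozenset of (variable, power) pairs so it can be a
# dict key.  The product is exactly polynomial multiplication:
# exponents add, coefficients multiply and accumulate.
from collections import Counter

def m(**powers):
    """Convenience: m(z3=1, z0=2) encodes the monomial z3 * z0^2."""
    return frozenset(Counter(powers).items())

def mul(series_a, series_b):
    """Multiply two formal power series {multi-index: coefficient}."""
    out = {}
    for beta_a, c_a in series_a.items():
        for beta_b, c_b in series_b.items():
            beta = Counter(dict(beta_a)) + Counter(dict(beta_b))  # add exponents
            key = frozenset(beta.items())
            out[key] = out.get(key, 0) + c_a * c_b
    return out

# (z3 + z0)^2 = z3^2 + 2*z3*z0 + z0^2, exactly as for polynomials:
s = {m(z3=1): 1, m(z0=1): 1}
sq = mul(s, s)
```

Nothing ever needs to converge here; the variables stay formal, which is the point of the ansatz.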
Perhaps let me write it here so that you can look at it for a moment: you want to write φ, at least formally, as a sum over all possible multi-indices β, with coefficients which are spacetime functions, times the monomial zᵝ, which depends on λ and p. The definition is that zᵝ = z₃^{β(3)} ∏_n z_n^{β(n)}, where n runs over derivative multi-indices in the d+1 variables: z_n evaluated at p is just the n-th derivative of p at the origin, and z₃ evaluated at λ is just λ. The reason for the strange index 3 is that the nonlinearity is cubic; we got used to that notation, so just accept it for a moment. A priori this is just an ansatz, but it is a very convenient one, and it helps you to think about the problem in the right way, because formal power series have very nice and simple algebraic properties. That's what I would like to explain on this slide. Whenever you have an algebra, like the real numbers or a function space, you can look at the space of formal power series in a number of variables; here the variables will be infinitely many, z₃ and the z_n's. They typically have no reason to converge; anyway, these are formal variables, infinitely many formal variables. But you can still multiply such series, much like polynomials; you have the same multiplication rule as if you were multiplying polynomials in these variables, and of course you have a unit. If you recall the ansatz of writing the counterterm as such a formal power series, you see that a more compact way of thinking is this: the coefficients c form a formal power series in the variable z₃, and the spacetime functions π form a formal power series in these infinitely many variables with coefficients in the algebra of spacetime functions, let's say for the moment smooth, random spacetime functions of
finite moments. That's a convenient way of looking at it, because then you realize that there is a simple translation of your original PDE in terms of these formal power series, in terms of the coefficients, which you now accumulate into this object π. So π is something which takes values in the function space and lives in these variables, and the PDE turns into something which still looks very simple and compact: you apply the operator L to π, which of course means you apply it to every coefficient individually, and this should equal z₃ times π to the power 3 (z₃ being the placeholder for λ, the power 3 obviously corresponding to the cube here), plus c times π (c being the placeholder for h), plus the noise times the unit of the algebra. This right-hand side I will often call π⁻. Once you're on this level, things in fact make rigorous sense term by term: from this identity you can deduce a hierarchy of equations which, at least as long as ξ is smooth, as long as you have the regularization on, make perfect sense and under certain conditions have unique solutions. In a certain sense this hierarchy gets anchored by its zero component, in which only the ξ survives: π₀ is nothing else than the solution of the linear problem, and in fact, by our assumptions on the noise, there is a unique stationary and centered solution of this equation. If you look at multi-indices β which are in a certain sense unit vectors, which put a one into one of the coordinate-function slots and zero elsewhere, so you feed in the standard polynomials, then you see that not all multi-indices are populated; there is a certain population condition. Here is an example of the first four multi-indices, say, and in terms of the right-hand side: if you want to compute the
coefficient of the multi-index which just puts a one in the z₃ slot, you recover the cube of the linear solution plus the counterterm; for this one you get three times the square of the linear solution, plus the bare counterterm; and so on. You see that pretty soon these expressions get more and more complicated, so it's convenient to work and to think on this abstract level rather than trying to write down the entire hierarchy; it's a simplification in terms of thought. In the context of the solution theory, we introduced this way of thinking in terms of multi-indices, this algebra, in an earlier paper, in the context of a different type of stochastic partial differential equation, not semilinear but with a quasilinear nonlinearity; singular quasilinear SPDEs are quite an active area. There is in fact an interesting algebraic aspect to it: we wrote a completely algebraic paper on what's called the structure group arising from this type of approach, which was recently also taken up by Yvain Bruned and his PhD student. Okay, but now let's go back to the question: can we show that the solution manifold, as we have now defined it at least term by term, stays controlled as we let the regularization go to zero? Do these coefficients stay under control? What gives you the right idea is that there is also scaling in this problem: a scaling transformation which acts on the nonlinear solution space, and in a certain sense, whatever you do, you want your renormalization to be compatible with this scale invariance, this action of scaling on your solution space. The starting point of the scaling is a parabolic rescaling of spacetime by some factor r, as indicated here, and then, you don't have to read the details, there is a certain way in which the scaling operation acts on the right-hand side, on the solution, on the parameter λ,
on the counterterm, so that the form of your equation stays preserved; that's not a big mystery. Now you want your parametrization of the solution manifold to be compatible with this action, and if you impose that, you see that these coefficients should satisfy a certain scaling law in terms of a number attached to the multi-index β, typically called the homogeneity. You don't have to absorb the formula; it comes out of one of these typical scaling arguments pretty easily. So that's an additional structure which you have, and whatever you do should preserve it. Why? Because eventually, once your regularization goes to zero, you want to use it for noise which has a scaling in law. White noise has a very specific scaling invariance in law under this type of parabolic rescaling, but let's think of something even more general than white noise: a Gaussian noise whose Cameron-Martin space is not L² (which would be white noise) but a fractional Sobolev space, characterized in the parabolic sense by an exponent α. It is well known that this noise is invariant in law under this transformation if you choose the scaling exponent s to have the value α minus the effective dimension over two. We're back to what I explained at the very beginning: the notion of effective dimension, which for the parabolic problem is the space dimension plus twice the time dimension, while α is the exponent which defines your Cameron-Martin space, your homogeneous fractional parabolic Sobolev space. So whatever you do, you should do it in such a way that your construction, your parametrization, preserves the scaling. You also want π, what in the jargon of regularity structures is called a model, your accumulated coefficients in the representation, to inherit the scaling in law, and what you infer from this is
instantly an estimate on how these coefficients should behave: if you look at a p-th stochastic moment of the coefficient π_β, convolved, averaged on a parabolic scale r, then this quantity should behave like r to the homogeneity. Note that this homogeneity might occasionally be zero or negative, it's not necessarily a positive number, so the notation with absolute-value bars might be a bit misleading; this is also why you need to express the estimate in terms of convolution and not in terms of Hölder-type increments. So that's what you want, and let me again give a few examples. Let's look at the case of four space dimensions. We have all heard of Aizenman and Duminil-Copin's result that Φ⁴ in four dimensions is trivial; but the problem is much easier, and non-trivial, if your noise is not white noise but slightly more regular, so that its Cameron-Martin space is an H^α with strictly positive α. In the four-space-dimensional case you see that you need α to be positive, in which case the effective dimension is six, because otherwise, and that's the typical dimension-counting argument, under this rescaling your nonlinearity would not become smaller but larger as you go down to smaller scales. That's an easy, admittedly hand-waving, way to see that for four space dimensions α needs to be positive for this problem to have a chance of being well behaved. Here are the first few multi-indices and their homogeneities: some of them definitely have negative homogeneity, many of them may have negative homogeneity, and some have positive homogeneity. However, the homogeneity is not to be confused with the regularity of these coefficients; the regularity is typically worse, because these coefficients, these random fields, for most multi-indices β,
are not statistically stationary. For instance, you can see that by looking at this multi-index, which has a homogeneity even larger than one; still, its right-hand side is as rough as the right-hand side for this one here, which has homogeneity 2α. So the bare regularity of the former is not 1 + 2α but just 2α: the homogeneities give you the scaling, but the objects can still be rougher than the scaling indicates. And here is the main result: we can prove exactly the estimate predicted by scaling, provided we work with the corresponding Cameron-Martin space. In fact the condition is more general; we don't need to work in the Gaussian case. What we really need is what's called a spectral gap estimate, so we can also work with more general ensembles; I'll explain what the spectral gap estimate is in a second. The statement is: suppose we have a noise ensemble, for which I will always use the name E, which satisfies the symmetries I explained at the beginning (translation invariance in law, parity in law, reflection symmetries in law; that's what I mean here), and which satisfies a spectral gap estimate whose Cameron-Martin space is a fractional Sobolev space of positive exponent α, which has to be irrational. That's a subtle condition coming from the fact that Schauder theory fails in the integer case: if α were rational you might hit integer exponents, which is always bad for Schauder theory; in that case you would expect to see logarithms in the scaling, which of course is an interesting thing, but which we haven't tackled yet. So if α is positive and irrational, you get exactly the scaling predicted by the simple hand-waving argument I showed you two slides ago. Let me explain again what the estimate means: you take this
β-component of your formal series representation, you convolve it on scale r with some fixed, once-chosen Schwartz function, parabolically rescaled in this way, and you evaluate at the origin, which was the point we singled out; that object has stochastic moments which behave as predicted by the homogeneity, and the estimate is uniform in the regularization. In fact one can even give a meaning to the problem in the limit, and there is a uniqueness statement, as worked out by Tempelmayr recently. So what does it mean to satisfy a spectral gap? It is, if you want, an infinite-dimensional Poincaré inequality, a Poincaré inequality on probability space. What it does for you is this: given an observable, a functional of the noise, some general nonlinear functional of the noise, which you can also interpret as a random variable, the variance of this random variable is estimated by the expectation of the square of a gradient. This gradient is the functional, or Fréchet, derivative of your random variable with respect to the noise, which is a linear form; you can at least make sense of it for cylinder functions, and that's how one gets started with Malliavin calculus. The way you measure the size of this linear form is by the norm dual to H^α. I wrote H^α down again for you, because of course it has to be defined with the right parabolic scaling. One convenient way of doing that, and inside the proofs we also sometimes use the semigroup generated by it, is to take the square of your parabolic operator, which is positive definite, take the fourth root, which gives you something like a first derivative, then the α-th power, and then the L² norm. So that's the condition we impose, and again it is satisfied by Gaussian measures, which, as many of you will know, are characterized exactly by a single Hilbert structure; here the Hilbert structure is taken to be this
fractional Sobolev space. Now, in the remaining time, I'd like to tell you a little about what goes into the proof and connect it to existing and later work. As some of you will know, what I'm presenting here is in a certain sense very much inspired by Hairer's regularity structures, and these stochastic estimates, on what in the language of regularity structures is called a model, are the starting point of that theory: the spirit of regularity structures is to separate, on the one hand, a stochastic estimate of the model and, on the other, a completely deterministic, path-wise solution theory of your SPDE. On the level of regularity structures, the type of result I'm presenting corresponds to the major work by Ajay Chandra and Martin Hairer, which is not published but has existed for quite some time on the arXiv under this title; Ajay Chandra gave a course in Leipzig on it a couple of months ago. It is a very mathematical-physics approach to these estimates: you estimate cumulants, the index set in that case consists of trees, which lead to Feynman diagrams; it's an extremely combinatorial and deep work, and quite different from what we did. But that's not the only approach to these stochastic estimates. There is also an approach via ideas from the renormalization group, in particular a continuum version of the renormalization group which goes by the name of the Polchinski flow: an approach started by Kupiainen, with a very recent result by Pavel Duch, who was actually a postdoc in Leipzig, which has very recently been taken up by Gubinelli and a PhD student. And in fact Hairer and Steele recently used our spectral gap approach and our inductive approach and extended them to the tree-based framework typically used in regularity structures. So there is quite some activity, in different ways of doing it. Now, in the remaining time, I would like to
explain to you why this Malliavin calculus, so the spectral gap and Malliavin calculus, taking the derivative with respect to the noise, is something very natural in this problem. Of course I want to advocate it; it's also very geometric and very analytic, as opposed to combinatorial. So this is one slide where I'd like to explain why using Malliavin calculus is a natural and good idea here. Let me just introduce the notation. If you haven't heard of Malliavin calculus, there is a very easy way to think about it: it is just taking the derivative with respect to the noise. If you're more of an applied mathematician, it's nothing fancy: you have the right-hand side ξ, you perturb it by some δξ, and you take the derivative with respect to this infinitesimal perturbation. In a certain sense you linearize your problem; it's nothing else than linearizing your problem in the right-hand side. What you gain is that you may assume that this infinitesimal perturbation of the noise is much smoother than the noise itself: it lies in the fractional Sobolev space, with positive regularity on the level of L², whereas the noise itself is rough, just a distribution of order even worse than minus three, or in our case slightly better than minus three. So you gain three derivatives by replacing an instance of the noise by an element of the Cameron-Martin space. That's why it's analytically smart: you have rephrased your problem and replaced something rough by something much more regular. But it's also conceptually smart, because the non-robust part of the problem is really this relationship between the right-hand side of your equation and the left-hand side, this algebraic relationship. We have a division of the problem into a linear differential relation, which is benign, and a nonlinear algebraic relation between π and π⁻, and in the latter is sitting
something that's divergent; so as it stands it is a bad way of writing things. The hope is that if you take the derivative, it will get better. Why is that a priori not a bad hope? Because the counterterm vanishes under the Malliavin derivative: the counterterm is deterministic, so its derivative with respect to the noise is zero. So it looks like a very natural idea. Here, on one slide, in two lines, is this non-robust definition of the right-hand side as it relates to the left-hand side, the algebraic relation between π⁻ and π, and here is its Malliavin derivative, where you just use the Leibniz rule. But now you see that you haven't solved all problems, because the noise is still there; it's not gone. The divergent term c is still present in this relationship between π⁻ and π, even on the level of the Malliavin derivative. Moreover, while it looks like you have gained a lot of regularity, in fact you haven't gained any bare regularity, because the regularity of δπ⁻ is still dominated by the worst part, by the π² term here; so δπ⁻ is in general not in this Sobolev space. It's a bit more subtle: it's very clear that the basic idea makes sense, but you have to work a bit, and again, in a certain sense, symmetry comes to your help. In order to explain that on the next slide, I have to tell you about what in the framework of regularity structures is called the structure group, which in our way of seeing things is in fact something very simple. Remember that at some point we chose the origin somewhat arbitrarily; there is no distinguished origin in spacetime. So the π which we defined, or constructed, based on the choice of one point, we could also construct with any other point: in fact we don't just have a single π but an entire family of centered models indexed by spacetime points, and clearly there should be a relation between these,
because they serve the same purpose: they represent the same solution. We can represent the same solution in different ways, once based on the model π with respect to the completely arbitrarily chosen origin, and once with respect to some other center. That defines a nonlinear map on these parametrizing polynomials, and the idea now is to algebraize, as we did before in a certain sense: to lift this nonlinear map, by pullback, to a linear transformation on the space of formal power series. Automatically it will be multiplicative, so it will be an algebra morphism. This change of coordinates you can implement in a completely algebraic way with the help of these change-of-base-point transformations, for which, following Hairer, I use the notation Γ_x. Here is an example of one simple transformation in terms of its matrix entry between two multi-indices, and you see that it typically arises from the model itself, by evaluating the model, taking derivatives of the model, at one of the two base points. In fact, as part of our result, we also estimate these change-of-base-point transformations. So there is this other nice structure to the problem, and it gives you the right idea of how to think about the Malliavin derivative. The basis was to say: fix some base point x; the π_x provide a parametrization of this nonlinear solution manifold. But if they provide a parametrization of the nonlinear solution manifold, they also provide a parametrization of its tangent spaces, in a certain sense. Taking the formal derivative with respect to the coefficients of the polynomial part, which just means taking derivatives with respect to the z_n's, spans the tangent space. The idea is then to express the Malliavin derivative, which is
also something like a tangent vector, in terms of these tangent vector fields to your nonlinear solution manifold. That's exactly the idea we're using: we write the Malliavin derivative of the model as an appropriate linear combination of these tangent vector fields, and we only need finitely many of them, because we can truncate; and for being able to truncate, it's good that one can localize. These objects, which we call the Γ's, again have a nice algebraic structure: they act, well, not quite like derivations, because the Γ itself appears in them, but like derivations along algebra morphisms. With this way of thinking we can now find a better, more robust relationship between π⁻ and π on the level of the Malliavin derivative, with the help of this new object, which ties δπ to π in very much the same way as it ties δπ⁻ to π⁻. That is how one finds a more robust representation of this nonlinearity on the level of the Malliavin derivative, through this infinite-dimensional geometric idea of representing the Malliavin derivative as a tangent vector field to your solution manifold. How much time do I still have?
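The basic idea of differentiating the solution with respect to the noise, stripped of all PDE structure, can be caricatured in zero dimensions (my own toy, not from the talk): replace the PDE by the scalar equation φ − λφ³ = ξ and differentiate the solution in ξ.

```python
# Zero-dimensional caricature of the Malliavin derivative: solve
# phi - lam*phi^3 = xi by Newton's method, then differentiate the
# relation implicitly in xi, giving d phi / d xi = 1/(1 - 3*lam*phi^2).
# The linearized problem is solved with the same linearization that
# Newton's method itself uses.

def solve_phi(xi, lam, tol=1e-12):
    phi = xi  # start from the solution of the "linear" problem (lam = 0)
    for _ in range(100):
        f = phi - lam * phi**3 - xi
        fprime = 1 - 3 * lam * phi**2
        step = f / fprime
        phi -= step
        if abs(step) < tol:
            break
    return phi

def malliavin_derivative(xi, lam):
    """d phi / d xi by implicit differentiation of phi - lam*phi^3 = xi."""
    phi = solve_phi(xi, lam)
    return 1 / (1 - 3 * lam * phi**2)

# finite-difference check of the derivative with respect to the "noise":
xi, lam, h = 0.3, 0.1, 1e-6
fd = (solve_phi(xi + h, lam) - solve_phi(xi - h, lam)) / (2 * h)
```

In the real problem the perturbation δξ is additionally smoother than ξ itself, which is the analytic gain; the toy only illustrates the linearization.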
Okay, yeah, that's certainly okay. Okay, so that's in a certain sense the structural idea of what we're doing, and of course in the end it's estimates. So here is just one slide on three more aspects of the proof. We run an induction. It couldn't be an induction on homogeneity, because this new object doesn't have the right triangular properties with respect to homogeneity; in fact we have to run a different type of induction. It's better to think of it in terms of three nested loops: in terms of how often you use the non-linearity, in terms of the homogeneity in the noise, which is this expression, and in terms of what's called the polynomial decoration, which is this expression. Then, another good idea: in order to never lose a little bit in your exponent, like in Kolmogorov's criterion, which you don't want to do because then you would violate scaling, and scaling was such an important guiding principle, it's convenient to work with what we call annealed norms. That means you put the probabilistic norm inside and the spatial norm, or the space-time norm, outside. So that's exactly what we're doing. Here is one sample estimate: this is the way we tie the Malliavin derivative of Π⁻ to Π⁻ itself via these tangent vector fields. We want to probe it by convolving on scale r and evaluating at this point x; we need to average over the point x, and that's what we need in order to pass from the L²-based topology, which is important for Malliavin calculus, to the more Hölder-based topology, which is important for everything else. Then we estimate this by an annealed version of the Cameron–Martin norm, which is here. We get a positive exponent in this convolution scale, which is crucial; we get a negative exponent in this localization scale, but we don't care at this stage; and we get the right homogeneity in terms of β. And in the
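The annealed-versus-quenched distinction can be seen in a tiny Monte-Carlo toy (my sketch, not the speaker's setup; all names are made up): the annealed norm puts the probabilistic L^p norm inside and the spatial supremum outside, and is always dominated by the quenched one.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy random field u(x, omega): N independent samples of a field on M grid points,
# with a spatially varying standard deviation.
N, M = 2000, 50
samples = rng.standard_normal((N, M)) * np.linspace(1.0, 2.0, M)

p = 2.0
# Annealed norm: probabilistic norm inside, spatial norm outside:
#   sup_x  E[ |u(x)|^p ]^(1/p)
annealed = np.max(np.mean(np.abs(samples) ** p, axis=0) ** (1 / p))

# Quenched norm: spatial norm inside, probabilistic norm outside:
#   E[ sup_x |u(x)|^p ]^(1/p)
quenched = np.mean(np.max(np.abs(samples), axis=1) ** p) ** (1 / p)

# Pointwise, E|u(x)|^p <= E sup_x' |u(x')|^p, so the annealed norm is
# always dominated by the quenched one.
assert annealed <= quenched
```

The gain of working annealed is precisely that one never pays the (here visible) cost of putting the supremum inside the expectation.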
end, we're using very much, inside the proof, the tools which Hairer propagates for the solution theory, which go by the names of reconstruction and integration. So we really leverage these tools in a place for which they hadn't initially been used. Okay, so I think I'm done. I know that this subject is not really something that's much done in this community, but I thought it would perhaps be interesting, because I know that this community is very geometrically minded and thinks in terms of geometries of solution manifolds, and I think in that sense there is a connection between what you're doing and what I'm presenting here. And of course, what I like is this: I don't know if some of you have encountered regularity structures, but the first thing you see is trees, right? They give you cherries, tripods, and then there's an entire zoo, and it becomes very combinatorial. But all the notions, all the tools of regularity structures, can be used in a much less combinatorial, much more geometrical and analytical way if you go this route, and I think it's just as performant if you replace the combinatorics by this more geometric way of thinking. Okay. Questions?
Can I ask a very naive question from a different angle: this equation is also the equation describing the motion of a domain wall, for example. So can you deal with periodic functions instead of the cubic non-linearity by the same technology, say cosine of φ instead, or is it completely different? I think in that case the scalings are different, and I think they're much closer to critical; there's a parameter in these equations which governs how close to criticality, in the sense of non-renormalizability, they are. So I would say certain aspects of what I'm presenting here are easily transferable to other problems, but then it probably produces different, and probably larger, challenges. What I'm presenting here is not meant, just as regularity structures are not meant, as something by which you can now solve everything, but more as a mindset, a set of tools, a way of looking at problems, which helps you to solve them and still requires individual ideas depending on the problem. Any questions?
If you compare the range of applications, would you say it's comparable to the more combinatorial approach, or could it be larger? No, I wouldn't think it's larger. We started getting into this because we wanted to be able to deal with certain types of equations which are not semi-linear, which have a stronger non-linearity, and that's where we developed these things; but then Gerencsér and Hairer showed that this can, in a sense, also be done within the more standard tree-based setup. If anything, my hope would be that certain things become more transparent. Sometimes in the existing work there are miraculous cancellations, or certain symmetries which you only discover at the end, because coefficients in front of certain trees cancel, and I think this approach puts things together in a more natural, more parsimonious way, and therefore the hope would be that it makes some of these problems clearer. But more general, larger: no. For instance, the scaling symmetry: in a certain sense we postulate it. The more standard approach goes as follows: you have the right-hand side of your equation, or you have your problem, and it comes with certain operations, like taking the cube and solving the equation, or more complicated things, and a tree encodes a certain order of performing these operations. In a certain sense that is much more ambitious, by trying to describe all these concatenated operations; this is why I would call it bottom-up. Whereas this approach says: let us start from the solution manifold, let us not try to describe every possible Picard iteration or so; we have a solution manifold, and let's leverage its symmetries, the scaling and translation invariances of the solution manifold, in building these objects. That's what I would call top-down. Just a naive question, because I see
that, it seems to me, you are aiming at something more specific: can you be in such a critical situation? No, no, no criticality; we're always in the subcritical case. When I was saying d equal to 4, I was cheating, in the sense that I took a noise which is slightly more regular than white noise; d equal to 4 would be critical for white noise. Here, like in standard regularity structures, we can only do things which are close to critical, but not critical; this approach doesn't add anything in that direction. Would the non-linearity always have to be analytic? The heuristic is very helpful: in the end, our primary application was a quasi-linear equation of this form, and while, in order to get the right ideas, it's good to think of A as being analytic, in the end, for the final solution theory, you need a high but limited amount of regularity on A, and how much regularity on A you need depends on how far you are from being critical. So it's more that, if you think in terms of these coordinates, you think in analytic terms, but in the end, for the estimates, you don't really use that so much.
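The dimension counting behind these criticality statements follows the conventions from the beginning of the talk: parabolic scaling counts time twice, white noise has Hölder regularity just below minus half the effective dimension, and the heat kernel gains two. As a back-of-the-envelope illustration (my sketch; the helper names are made up):

```python
def solution_regularity(d):
    """Holder regularity (just below this value) of the solution of the
    linear heat equation driven by space-time white noise in d space
    dimensions, under parabolic scaling."""
    D = d + 2            # effective dimension: time counts twice
    return 2.0 - D / 2.0 # noise regularity -D/2, heat kernel gains 2

def phi4_subcritical(d):
    """phi^4 is subcritical iff the cube u^3 (regularity 3*alpha for
    alpha < 0) is still better than the noise, i.e. 3*alpha > alpha - 2,
    which amounts to alpha > -1, i.e. d < 4."""
    a = solution_regularity(d)
    return 3 * a > a - 2

# d = 3: solution regularity -1/2 (subcritical);
# d = 4: solution regularity -1, the critical borderline for white noise.
assert phi4_subcritical(3)
assert not phi4_subcritical(4)
```

This is consistent with the remark above: at d = 4 one is critical for white noise, which is why a slightly more regular noise was used there.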