Dear Dirk, I wish you of course a happy celebration of your 60th during this week, and I really want to congratulate you on your great discoveries. So this talk will be, indeed, you know, a great occasion for me to pay tribute to Dirk, whose encounter was a key turning point in my own understanding of, I would say, both physics and mathematics and the relation between the two. And I will start, you know, by saying that I have always been fascinated by the courage with which physicists attack seemingly intractable mathematical problems. The one I will discuss today is renormalization, and it will become clearer and clearer what I am trying to emphasize. So I will start from, you know, the birth of quantum field theory. After Planck's discovery in 1900, there is a clear statement in a paper of Einstein of 1906 that the energy of an oscillator can take only those values which are integer multiples of hν. So this is quantization; at that stage it is quantization as a sort of wishful thinking, and it was put on solid ground by two papers. There is the paper of Born, Heisenberg and Jordan, I think in 1925, I'm not completely sure, and then of course the paper of Dirac. So what is done there is something really quite amazing. Because of this statement of Einstein, you want to impose a very strange condition on a complex number: you want complex numbers, but you want to subject them to the condition that the absolute value squared is an integer. Now, as it stands, you know, this looks totally impossible, totally unnatural. But this is what you want for all the coefficients which will appear, you know, in the Fourier expansion of a wave. Okay, so the amazing idea, the amazing ansatz, comes from both papers. In Born, Heisenberg, Jordan, they studied the oscillator and they found the corresponding operator.
And then Dirac used it in second quantization, in the first example of second quantization. And what is the miracle? The miracle is that you take not a complex number but an operator. And for this operator, you know, it is as if z were not commuting with z̄: the replacement of z̄ is the adjoint of the operator. So you have two operators, A and A*, adjoint of each other, and they fulfill the condition that their commutator, AA* − A*A, is equal to one. Okay, this is extremely simple. And just from this formula, it immediately follows that when you take A*A, which is like the absolute value squared |z|², its spectrum will consist of integers. The reason is very simple, you know: in general the spectrum of AB is equal to the spectrum of BA, except possibly for the presence of the point zero in the spectrum. And so, if you have this relation, it means that you can descend. First of all, the spectrum is positive because the operator A*A is positive. And if you take a number in the spectrum, you can descend it by one. The only way to avoid descending to something negative, which is of course absurd, is to land on zero. So necessarily the spectrum is formed of the non-negative integers. Now, with this ansatz, Dirac was able to derive, mathematically speaking, the formulas that Einstein had guessed by thought experiments, the A and B coefficients of absorption and emission of radiation by an atom. So this was a fantastic success, in 1927, and it was really the birth of quantum field theory. Now, you know, I want to show you a kind of a joke, but one which is very, very comforting somehow. I don't know how real it is; I mean, I don't know if Einstein really said that, but we don't care.
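The descent argument above can be checked numerically on a truncated oscillator. A minimal numpy sketch (the truncation size N and all names here are mine, not from the talk): on the first N states, the number operator A*A is exactly diagonal with spectrum 0, 1, …, N−1, while [A, A*] = 1 holds everywhere except in the last corner, an unavoidable artifact of truncation.

```python
import numpy as np

def ladder(N):
    """Truncated annihilation operator a on the first N oscillator states,
    acting as a|n> = sqrt(n)|n-1>."""
    a = np.zeros((N, N))
    for n in range(1, N):
        a[n - 1, n] = np.sqrt(n)
    return a

N = 8
a = ladder(N)
adag = a.T  # the adjoint (the matrix is real)

# The number operator a*a is exactly diagonal with spectrum 0, 1, ..., N-1.
number = adag @ a
print(np.round(np.linalg.eigvalsh(number)))  # spectrum: 0, 1, ..., 7

# [a, a*] = 1 holds except in the last diagonal entry (truncation artifact).
comm = a @ adag - adag @ a
print(np.round(np.diag(comm)))  # 1 everywhere except -(N-1) in the corner
```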
Okay, so what he says here is: "Do not worry about your difficulties in mathematics. I can assure you that mine are still greater." Well, you know, this is quite an amazing statement. Now, indeed, the mathematical difficulties which you reach extremely soon when you handle quantum field theory are encapsulated by one formula, which is due to Feynman. You know, there is a saying among physicists, probably of the past, that Schwinger brought quantum field theory to the classes and Feynman brought it to the masses. Well, the reason why he brought it to the masses is that you have a principle which is incredibly simple to formulate: the probability amplitude — remember that probability amplitudes are like square roots of probabilities — the probability amplitude of a configuration is given by this formula, which is the imaginary exponential of the action in units of ℏ. And of course the action is defined like this. So, I mean, this is extremely delicate, because, you know, you will write a functional integral. And if you were to write this functional integral in Minkowski space, not in Euclidean space, then you would immediately meet the difficulty that the propagator has a singularity, and you don't know how to handle this. Now, it is handled by what one calls the Feynman iε prescription. But that means essentially that you are passing to Euclidean signature. And so what you do is you compute, if you want with a source; you compute the functional integrals. And what comes out very quickly is that, first of all, you don't know what this integration measure is at all. The only thing you can do is actually take the free theory, the free field, and perturb around the free field. When you do that, you have to integrate by parts against the Gaussian, which, okay, anybody can do. And then you get some expressions which give you the perturbative expansion.
But what you find out almost immediately is that when you go beyond tree level, beyond the level at which Dirac was working, the integrals that you get, the expressions that you get, are in fact divergent integrals. So, on the face of it, you get something which is meaningless. Now, this type of meaningless result has an old ancestor, actually. And for this old ancestor, I remember a talk by Sidney Coleman in 1978, in which he was giving an example which is a slight variant of the following. He was giving the example of a balloon filled with helium, and you compute the initial acceleration of the balloon when it is released and goes up. But you find something ridiculously different from what is observed. And you can also take the example of a ping-pong ball in water. The Archimedean principle — the fact that the buoyant force corresponds to the mass of the water that is displaced and so on — does not work at all: it gives you a result which is in contradiction with experiment. And it was observed by Green in 1830, actually, that there is a beautiful explanation for this. The explanation is that when you compute the mass that should enter in Newton's law, you find that it is not the original mass, M₀ if you want, that you would have for the balloon or for the ping-pong ball. You have to add to it a correction term, which is actually one half of the mass of the water displaced by the ping-pong ball, or of the air for the balloon, and so on. And what you find out then is that the initial acceleration cannot exceed 2g. The reason behind this is that the ping-pong ball or the balloon is actually immersed in a fluid, and in the motion what happens is that it creates a disturbance in the fluid. When you compute the energy of this disturbance, it actually adds an extra term to the effective mass that you are handling.
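Green's added-mass correction can be sketched in a few lines. This is my own numerical illustration, not from the talk; the ball and water masses are invented round numbers. With Newton's law using the effective mass m₀ + m_f/2, the initial acceleration a = (m_f − m₀)g / (m₀ + m_f/2) tends to 2g, and never beyond, as the bare mass m₀ goes to zero.

```python
g = 9.81  # m/s^2

def initial_acceleration(m0, m_fluid):
    """Initial upward acceleration of a light ball released in a fluid:
    buoyancy minus weight, divided by the *effective* mass, i.e. the bare
    mass m0 plus Green's added mass (half the displaced fluid mass)."""
    return (m_fluid - m0) * g / (m0 + 0.5 * m_fluid)

# Ping-pong ball: ~2.7 g ball displacing ~33 g of water (rough numbers).
naive = (0.033 / 0.0027 - 1) * g          # Archimedes without added mass
print(initial_acceleration(0.0027, 0.033), "vs naive", naive)

# The massless limit: the acceleration tends to 2g, never beyond.
print(initial_acceleration(1e-12, 0.033) / g)  # close to 2.0
```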
So somehow, in the case of the balloon or of the ping-pong ball, you can actually find out what M₀ is. What you do is you take the ping-pong ball out of the water, and that's it: okay, you can weigh it. But what physicists understood very, very soon is that this is not the case for the electron. Because if you have the electron, you cannot take it out of the electromagnetic field, whatever you do. So you will never be able to find what is called the bare mass of the electron, for instance. So this resulted in a lot of fighting, a lot of thinking, and an amazing, how to say, development, which is called renormalization, and which led physicists to slowly understand what was going on — in the hands, of course, of Schwinger, Feynman, Dyson, and then Bogoliubov, Parasiuk, Hepp and Zimmermann. And they came up with a very good way — in fact, I think this is actually due to 't Hooft and Veltman — to understand and get rid of these divergences, starting from the physical principle that, for instance, the bare mass is different from the effective mass, and similarly for the charge, similarly for the field strength. You have to use a regularization process, and the most efficient regularization process is what is called DimReg, dimensional regularization. So the idea of DimReg is simply this formula. This formula tells you that if you have to integrate a Gaussian in d dimensions, you don't have to worry whether d is an integer. You can adopt a definition, and this definition is that the integral of the Gaussian in dimension d is given by this formula. Now what you do is you take one of these divergent integrals you are getting from Feynman graphs and you handle it by passing to what are called Schwinger parameters. Namely, you rewrite the integrand as a sum of Gaussian expressions. And then you compute, okay, you compute with this. And when you compute with this on an example, you can, you know, find out what you get.
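The DimReg definition mentioned above can be made concrete with sympy: for integer d the Gaussian integral factorizes into d one-dimensional integrals, giving (π/λ)^(d/2), and DimReg simply promotes that closed form to the definition for arbitrary complex d. A small sketch (my illustration, with my own symbol names):

```python
import sympy as sp

x, lam = sp.symbols('x lam', positive=True)
d = sp.Symbol('d')

# In integer dimension d the Gaussian integral factorizes into 1-d integrals.
one_dim = sp.integrate(sp.exp(-lam * x**2), (x, -sp.oo, sp.oo))  # sqrt(pi/lam)
for n in (1, 2, 3):
    assert sp.simplify(one_dim**n - (sp.pi / lam)**sp.Rational(n, 2)) == 0

# DimReg: the closed form becomes the *definition* for any complex d.
gaussian_dimreg = (sp.pi / lam)**(d / 2)
print(gaussian_dimreg.subs({d: 4 - sp.Symbol('epsilon'), lam: 1}))
```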
And typically what you will get, in the simplest examples, will be gamma functions. And these gamma functions will have, because of the divergence of the integral, the bad taste, if you want, of having a pole at the dimension you are interested in. For instance, if you are in dimension four, then this expression, when d equals four, will have the pole of Γ at z equals zero, multiplied by something that you can compute. So what the physicists invented, through all these years of fighting and understanding and so on, is a process which is combinatorial, which is called minimal subtraction, and which allows you at the end of the day to get a finite result. So first of all, you have the preparation. This preparation comes from the fact that you have to take into account, when you work at higher loops and so on, what you did before. These are the so-called subdivergences. You have to prepare a graph by accounting for the terms that you had computed before, by a certain formula. This will give you what are called the counterterms. In these counterterms, what is really important is that you take the pole part: so T is the operation of taking the pole part, the singular part — and of course not only the 1/ε term, but 1/ε² and so on and so forth. Okay, so you take these counterterms and you define the renormalized value by minimally subtracting, if you want, the divergent terms. So the renormalized value is given by this formula. Okay, so this is a combinatorial recipe. It is very complicated. And when you see it as a mathematician, at first you say, okay, well, it's hopeless, you know — because, okay, one understands why in physics you have to do that.
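The pole-part operator T and minimal subtraction can be sketched on a toy "graph value" with sympy. This is a deliberately simple illustration of my own (the factor (1 + 3ε) is invented), showing the typical structure: a gamma-function pole times a regular factor, with T extracting the negative powers of ε.

```python
import sympy as sp

eps = sp.Symbol('epsilon')

def pole_part(expr):
    """T: the pure pole terms (negative powers of epsilon) of the Laurent
    expansion of expr at epsilon = 0."""
    s = sp.expand(sp.series(expr, eps, 0, 1).removeO())
    return sum((t for t in s.as_ordered_terms()
                if t.as_powers_dict().get(eps, 0) < 0), sp.Integer(0))

# A toy "graph value": a Gamma-function pole times a regular factor.
value = sp.gamma(eps) * (1 + 3 * eps)

T = pole_part(value)                        # the counterterm: 1/epsilon
renormalized = sp.limit(value - T, eps, 0)  # finite: 3 - EulerGamma
print(T, renormalized)
```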
But mathematically speaking, it is very difficult to imagine that this could have mathematical meaning — not that it is not rigorous, it is perfectly rigorous — but, you know, I mean conceptual meaning. Okay, and this is what we found with Dirk Kreimer in our collaboration. We began, I think it was in 1998, we began to work together in 1998. And the key idea came from Dirk. The idea of Dirk was that when you look at these graphs — in fact, at first he was working with rooted trees — there is a Hopf algebra structure behind the scenes. Now, at the time when I met Dirk, I was working with Henri Moscovici, and we were also working with Hopf algebras. So I was, you know, perfectly ready to absorb Dirk's discovery. So it turns out that when you formulate this Hopf algebra in terms of graphs, it is a beautiful thing. Namely, you take the free commutative algebra generated by graphs — so you take linear combinations of graphs, formal products — and you define a coproduct. And this coproduct is specified on the graphs which are one-particle irreducible, you see, by this formula. These are the subgraphs which correspond to the subdivergences. So you define this formula and play with it. And it is quite amazing that the coproduct you have defined like this is actually coassociative, that it is a morphism of algebras, and so on. This coproduct goes from H to H ⊗ H. And when you think about it after the fact, after a long process, you find out the right analogy for this kind of Hopf algebra, which is commutative but not cocommutative — because the coproduct is not cocommutative in general, since you have terms like this: it is the Hopf algebra underlying a group, a formal group.
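The full Connes-Kreimer coproduct on graphs is involved, but its shape on nested one-loop "ladder" insertions reduces to a toy Hopf algebra with Δ(x_n) = Σ_k x_k ⊗ x_{n−k}, and already there one can verify coassociativity by hand. A minimal sketch of my own (tensors represented as dicts from index tuples to coefficients):

```python
from collections import defaultdict

def coproduct(n):
    """Toy 'ladder' coproduct: Delta(x_n) = sum_k x_k (x) x_{n-k},
    the shape the graph coproduct takes on nested one-loop insertions."""
    return {(k, n - k): 1 for k in range(n + 1)}

def coassoc_check(n):
    """Compare (Delta (x) id) Delta with (id (x) Delta) Delta on x_n."""
    left, right = defaultdict(int), defaultdict(int)
    for (a, b), c in coproduct(n).items():
        for (a1, a2), c1 in coproduct(a).items():
            left[(a1, a2, b)] += c * c1    # apply Delta to the left leg
        for (b1, b2), c1 in coproduct(b).items():
            right[(a, b1, b2)] += c * c1   # apply Delta to the right leg
    return dict(left) == dict(right)

print(all(coassoc_check(n) for n in range(8)))  # → True
```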
Now, the closest example you can think of, which is in fact very, very closely related to this one, is the Hopf algebra that you would get by looking at Taylor expansions of diffeomorphisms. So in fact, you know, the name we coined was "diffeographisms", because of the graphs. I mean, this is the way you have to think about it. You have to think that there is a group underlying this: the group law is the composition of things which are like diffeomorphisms, and they are given by their Taylor expansion, which corresponds to the perturbative expansion. So there is this coproduct. And now, the great discovery that we made — I think it was either in 1999 or in 2000, and this was absolutely a fantastic moment — is that this combinatorial procedure of physicists is in fact nothing but something which is known in mathematics and which is related to a geometric problem. And this geometric problem is the problem of understanding bundles on the Riemann sphere, P¹(C). In order to understand such bundles, what you do is the following: they are given by gluing data. Gluing data because — I am talking about holomorphic bundles — the parts which live on the upper hemisphere or the lower hemisphere are easily understandable; the non-trivial part is how you glue them. A lot of work was done on that in mathematics. And it turned out that in physics, because of the nature of the problem, which is a perturbative problem, instead of considering a bundle with values in a group like GL(n, C) — a vector bundle, which is what Grothendieck was working on — we shall be working with bundles whose structure group is a pro-unipotent group. Okay, and that makes things much simpler, in the sense that you don't have, if you want, global obstructions to trivializing a bundle.
But when you understand what it means to trivialize such a bundle, then you apply it to the following loop. You see, going back to the Hopf algebra, it turns out that when you look at the values of the graphs when the dimension is not the critical dimension — not d equals four and so on and so forth — then you can give them a meaning. And what does this tell you? It tells you that what you have is a loop γ(z) with values in the group attached to the Hopf algebra; this γ(z) is like gluing data, and you don't know how to evaluate it at the critical dimension, because there it is singular. So what you do is you apply the method which allows you to trivialize this bundle, and this method is called Birkhoff decomposition. And what does it do? It writes this loop with values in the group — the group is highly non-abelian, it is not a commutative group at all — as a ratio of two loops: one, γ₋, which will be quite singular, and one which will be perfectly regular on C₊, which is γ₊(z). And the amazing result that we proved with Dirk is that when you look at the mathematically uniquely defined Birkhoff decomposition of the loop corresponding to the data computed by DimReg and so on, then you find by induction that it is given by this formula, where T is the same as before, the extraction of the pole part, okay? And amazingly — I mean, this was an amazing moment — this process exactly coincides with the recursive process, with the combinatorial recipe of minimal subtraction, okay? So this makes the translation from one to the other. So this was, you know, an absolutely key moment. And why? What does it mean? It means that one has a conceptual understanding of this recursive process of physicists.
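The recursive formula mentioned here can be sketched on the toy ladder Hopf algebra with Δ(x_n) = Σ x_k ⊗ x_{n−k}: there the Birkhoff recursion reads γ₋(x_n) = −T[γ(x_n) + Σ_{0<k<n} γ₋(x_k)γ(x_{n−k})] and γ₊(x_n) = (1 − T)[same bracket]. The Laurent data γ(x_n) below are invented toy values, not a real DimReg computation; the point is only that γ₊ comes out regular at z = 0.

```python
import sympy as sp

z = sp.Symbol('z')

def pole_part(expr):
    """T: keep only the negative powers of z in the Laurent expansion at 0."""
    s = sp.expand(sp.series(expr, z, 0, 1).removeO())
    return sum((t for t in s.as_ordered_terms()
                if t.as_powers_dict().get(z, 0) < 0), sp.Integer(0))

def gamma(n):
    """Invented Laurent data for the ladder generators: a simple pole each."""
    return 1 / (n * z) + n

def birkhoff(N):
    """Birkhoff recursion on ladders: gamma_-(x_n) = -T[bar], with
    bar = gamma(x_n) + sum_{0<k<n} gamma_-(x_k) gamma(x_{n-k}),
    and gamma_+(x_n) = (1 - T)[bar]."""
    minus, plus = {}, {}
    for n in range(1, N + 1):
        bar = gamma(n) + sum(minus[k] * gamma(n - k) for k in range(1, n))
        minus[n] = sp.expand(-pole_part(bar))
        plus[n] = sp.expand(bar + minus[n])  # regular at z = 0
    return minus, plus

minus, plus = birkhoff(4)
# Renormalized values: evaluate the regular part at the critical point z = 0.
print([sp.limit(plus[n], z, 0) for n in range(1, 5)])
```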
So in other words, we have a meromorphic loop with values in this group associated to the Hopf algebra, okay? And when you now take the renormalized values of an observable and so on, what do you do in the DimReg-plus-MS scheme? What you do is you ignore the divergence by replacing γ(0) by γ₊(0) in this non-commutative decomposition process, which is the Birkhoff decomposition. So what does it mean? It means, you know, that if in the middle of the night somebody came, put a gun to your head, and asked you what renormalization is, this would be my answer. My answer would be: okay, well, look, you take the Birkhoff decomposition of the loop, you keep the part of the loop which makes sense, and you ignore the other one, okay? So now it turns out that there is much more to that. There is much more behind the scenes in these data, and what is behind the scenes is related to Galois theory. I will come to Galois theory much later. But now I will describe results which were obtained in collaboration with Matilde Marcolli. Before I do that, I would like to say that in the work that we did with Dirk, we had as a corollary the way the group was acting on the coupling constants. Namely, there is a natural morphism from the group associated to graphs to diffeomorphisms. As I was saying, this group should be thought of as "diffeographisms", so it is related to diffeomorphisms, and the way it is related to diffeomorphisms is by the way it acts on the coupling constants: it acts on the coupling constants through, if you want, the image under this morphism. And because the Birkhoff decomposition is, if you want, functorial, what happens is that you can get the effective coupling constant from the Birkhoff decomposition.
So, I mean, you can get the finite, renormalized coupling constant from the Birkhoff decomposition. This is the corollary of what we had done before. So, as I said, I continued working on this. In what we had done with Dirk, we had understood the renormalization group, and this was coming essentially from the fact that when you look at dimensional analysis, when you do integration in dimension d minus z, you have to introduce a dimensionful parameter, which has the dimension of a mass, which we call μ, okay, and which we have to put in the formulas. And the amazing fact, which is a fact of life, you know, and which was known before, is that when you take the negative piece — as we now say, in the Birkhoff decomposition, but okay, in physics terms this was known in the minimal subtraction scheme — this negative piece in the Birkhoff decomposition is actually independent of μ. From that, a one-parameter subgroup of the group associated to the Hopf algebra appears completely naturally. And now, in my work with Matilde Marcolli, what we did was to understand, if you want, the link between all these facts that I mentioned before and Galois theory — I mean differential Galois theory, but differential Galois theory in a situation which is much wider than the Picard-Vessiot theory of differential Galois theory for regular singular differential equations. And, you know, the Picard-Vessiot theory, which is very beautiful and which applies very well to regular singular differential equations, stayed a little bit silent for a long time, because there was one essential result, which was that the Galois group is the Zariski closure of the monodromy.
But then, in the hands of Martinet and Ramis, of Malgrange, and also of Écalle, it became, you know, a considerably more, how to say, sophisticated theory that applies to irregular singular situations. Now, in the work with Matilde, what we have done is to apply, if you want, the Tannakian formalism, which was first formulated by Grothendieck and then, okay, developed by many other people, in particular by Deligne. What we have found is that there is a natural Tannakian category of, how to say, differential systems — if you want of connections, I mean of modules — which is associated to the renormalization problem and which embodies all the previous properties I talked about. Now, the main idea is the notion of an equisingular flat connection. For that, I have to do a little bit of geometric thinking. You have to think that — when I was telling you that there was this ε, which is a complex number, say very close to zero, but you don't want ε equal to zero — what you do is you take a punctured disk, which you call Δ*. Okay. But there is also this parameter μ. And when you combine this parameter μ with the ε, what you get is a space of complex dimension two. This space of complex dimension two, which is essentially, you know, a fibration by the multiplicative group Gm, which is C*, fibers over the disk Δ. But you want to remove the part which is above zero. And the part which is above zero — sorry, I call the total space B, and this part is π⁻¹(0). I will talk later about the meaning of the fiber π⁻¹(z) in terms of the constant ℏ.
But what happens is that, because of this independence of the negative part in the Birkhoff decomposition for such loops, there are associated to them what are called equisingular connections. The equisingular connections are flat connections which are invariant under the multiplicative group, but which are such that when you restrict them to sections from Δ to B — so this is one section, this is another section — the singularities that you get at the point zero are the same, okay? So then, what we found by applying the Tannakian formalism, which is a beautiful thing, is the following. What it tells you is that if you have what one calls a Tannakian category — so it is an abelian category, but it also has a tensor product — and if you assume that this category has what is called a fiber functor, that is, a functor to ordinary vector spaces over a field, then one can prove abstractly, under certain conditions, that it defines an affine group scheme, which is given, if you want, as a functor from arbitrary commutative rings to groups; the corresponding group is like the automorphisms of the fiber functor when you take it over the ring. So what we prove is that if we take the category of equisingular flat bundles, it turns out to be equivalent to the category of finite-dimensional representations of a certain affine group scheme which is uniquely determined. And it turns out that this group is a semidirect product, by the multiplicative group — which acts, by the way, via the grading by loop number — of a certain pro-unipotent group. This pro-unipotent group is uniquely determined, and it is the unipotent group whose Lie algebra is freely generated by one generator e₋ₙ in each degree n, for every positive integer n.
Now, I should mention just briefly, in passing, that a similar group does appear in motivic Galois theory, but not in a canonical manner, so it is very elusive to understand the relation. Now, to go a little bit deeper into what happens: as I said, we are dealing with irregular singularities, so this is very connected to the theory of Ramis, I mean the exponential torus of Ramis. And what happens is that, in order to write formulas, one needs to use a device which is called the expansional, or time-ordered exponential. There is, by the way, a beautiful paper of Araki going back to the 70s — I think it is in the Annales — in which he gives a beautiful general theory of this expansional. So this is okay, something which is well understood. It is the time-ordered exponential, and it is very useful for writing solutions of differential equations. Okay, and it makes sense in this Hopf-algebra framework. And it turns out that behind the scenes, in what I told you before, there is a certain canonical morphism from the additive group to the pro-unipotent group U which I defined, and which underlies the previous result, this theorem here. So this is the underlying group U, and this morphism is defined by this formula. And, as we shall see a little bit later, it embodies exactly what is called the renormalization group, which is just a subgroup of the group we are dealing with — the full group is much richer, because it is a highly non-abelian group. So it turns out that there is an object, the universal singular frame, which is defined by a time-ordered exponential; the Y which appears here is the grading by loop number. Okay. And, okay, all this makes sense.
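The time-ordered exponential can be sketched numerically. This is my own illustration, not Araki's construction: approximate T exp(∫ A(t) dt) by an ordered product of short-time factors (later times to the left), and check that for a commuting family A(t) = f(t)·X it collapses to the ordinary exponential exp(∫ A dt).

```python
import numpy as np

def texp(A, t0, t1, steps=2000):
    """Time-ordered exponential: ordered product of (I + A(t) dt), later
    times to the left, approximating T exp( int_{t0}^{t1} A(t) dt )."""
    U = np.eye(2)
    dt = (t1 - t0) / steps
    for k in range(steps):
        t = t0 + (k + 0.5) * dt
        U = (np.eye(2) + A(t) * dt) @ U  # each new factor goes on the left
    return U

def expm_series(M, terms=40):
    """Matrix exponential via its power series (fine for small matrices)."""
    out, P = np.eye(2), np.eye(2)
    for n in range(1, terms):
        P = P @ M / n
        out = out + P
    return out

# Commuting family A(t) = cos(t) X: the ordered product collapses to
# exp( int_0^1 cos(t) dt X ) = exp( sin(1) X ).
X = np.array([[0.0, 1.0], [-1.0, 0.0]])
U = texp(lambda t: np.cos(t) * X, 0.0, 1.0)
print(np.max(np.abs(U - expm_series(np.sin(1.0) * X))))  # small discretization error
```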
And the funny thing, which at the moment has no very good explanation, is that when you expand this universal singular frame — I will explain what its role is — you get the same coefficients as in the local index formula that we had obtained with Henri Moscovici, on which we had been working a few years before. But this has not yet found a conceptual explanation. Now, the main result is the following. If you take a pro-unipotent affine group scheme G which is positively graded — exactly what happens in physics, thanks to the Hopf algebra of Dirk — then, first of all, there exists a canonical bijection between equivalence classes of flat equisingular connections and graded representations of this universal group U that I defined, okay, in the group G; or equivalently, of course, you can take the semidirect product with the multiplicative group Gm corresponding to the grading. Next, the universal singular frame provides universal counterterms. This is a fantastic fact, and it is related, you know, to what are called the 't Hooft relations. Then, given a loop, the universal singular frame maps automatically, through the representation ρ, to the negative piece of the Birkhoff decomposition. And finally, the renormalization group — which, as I said, is a one-parameter subgroup of the group associated to the Hopf algebra — is obtained as the composition of the representation with the morphism rg from the additive group to U which was defined before. Now, you see, one cannot refrain from quoting Cartier, because even though Cartier had perhaps a slightly different motivation, he had the right vision, in the sense that what he wrote is: "The increasingly manifest kinship between the Grothendieck-Teichmüller group" — that was another inspiration, because it came from number-theoretic things — "on the one hand,"
"and the renormalization group of quantum field theory on the other, is doubtless only the first manifestation of a group of symmetries of the fundamental constants of physics, a sort of cosmic Galois group." So when we found this group with Matilde — I mean, this group which was coming from the Tannakian category and so on — we couldn't refrain from calling it, you know, the cosmic Galois group. It really is what it is, because, as I said before, from the work with Dirk on the action on the coupling constants, this group actually maps to the group of diffeographisms, and in turn it maps to diffeomorphisms of the coupling constants. So in fact this cosmic Galois group acts exactly as Cartier was envisaging. It really acts on the fundamental constants — of course, as you know very well, the fundamental constants of physics are not constants, they are functions, they depend on the energy scale. So this is exactly what is going on here. Now, all this leads me to Galois, because the idea behind the renormalization group, the idea behind all of this, is that when you do physics, you find out that there is something very elusive in the renormalization process, which is that there is still some ambiguity. And this ambiguity relates to the fundamental idea of ambiguity, which is the idea of Galois. I had, you know, the occasion to give a talk about Galois. And when I talked about Galois, I said the following: that Galois is a rare example — perhaps only equalled by some poets or musicians — of a creator who, on the 200th anniversary of his birth, appears still so young and "fringant"; I don't know how to translate that.
And I continued by saying that one can assert that the theory of ambiguity, which is the fruit of his purely mathematical thought, is like a wild animal which has never been captured by the modern formalisms. I mean, Grothendieck came very close to capturing it with, you know, the Tannakian formalism and so on. But there is this striking contrast between the small number of pages that Galois left at his death and their incredible influence on mathematics. Now, you know, there are misconceptions about Galois, because many people say that what Galois did was to invent the Galois group and to understand symmetries and so on. But this is very far removed from the reality of what he did. There is always this contrast, you know, between the formal things and the very concrete things which are behind them. Of course, this contrast is very present in renormalization, but it is equally present in the work of Galois. And what one has to know, in order to appreciate Galois, is that when he was 17 or 18, he wrote a paper in which he defined finite fields, which in Anglo-Saxon countries are called Galois fields. In France they are not, because, I have to say, calling them Galois fields makes you immediately think about his death. But what is amazing is that when he was 17 or 18, he enunciated an amazing theorem which even now, if you try to prove it, will give you trouble, even if you think you know Galois theory. What is the theorem that he announced? The theorem — it is announced around here — says that if you take what he calls a primitive equation, then for this equation to be solvable by radicals, it is necessary and sufficient that you can index its roots by a finite field F_q. So you label the roots α_a, where a belongs to F_q. And the Galois group has to be a subgroup of what?
Of the affine group, the ax + b group of F_q, but extended, as a semidirect product, by the powers of the Frobenius automorphism. So this is absolutely mind-blowing. It is amazing that he could find this result at that age. And moreover, when you look carefully at the paper of Galois, you find out what his motivation was. His motivation was not that of Lagrange, finding invariants and so on. No, no. His motivation was to find all relations that hold between the roots of a given equation. And what you will find out if you go deeper is that the way he did it was to find an auxiliary equation, of much higher degree, such that the roots of your given equation, the roots alpha, beta, and so on, are all rational functions of the root x of the auxiliary equation. So for instance alpha is now alpha of x, beta is now beta of x; they are all rational functions, because x is the solution of some equation. And now you say, okay, that's very nice, but how do I know x? Well, x is the solution of another equation. So the original roots were solutions of an equation, p(alpha) = 0. And this x is the solution of another equation, of much higher degree, say q(x) = 0. So you wonder: all right, you have replaced this equation by that one, but what did you gain? Well, you gained something tremendous, because how do you solve the equation q(x) = 0? Now I am treating the case of Galois; the same idea appears with Picard and Vessiot and so on. How do you solve it? Well, you just take all polynomials k(x) over the field you are interested in, and you divide them by q. When you do that, x is automatically a solution. That's your x: x satisfies q(x) = 0. Well, that's fine. Yes.
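The procedure just described, "take all polynomials and divide them by q", is computation in the quotient ring Q[x]/(q). A minimal sketch of the relation-checking this buys, using stdlib rationals and an illustrative example of my own (x = sqrt(2) + sqrt(3), whose minimal polynomial is x^4 - 10x^2 + 1; none of these particular numbers come from the talk):

```python
from fractions import Fraction

# Auxiliary polynomial q(x) = x^4 - 10x^2 + 1; its root
# x = sqrt(2) + sqrt(3) generates both original roots rationally.
# Polynomials are coefficient lists, constant term first.
Q = [Fraction(1), Fraction(0), Fraction(-10), Fraction(0), Fraction(1)]

def pmul(f, g):
    """Multiply two polynomials."""
    out = [Fraction(0)] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            out[i + j] += a * b
    return out

def padd(f, g):
    """Add two polynomials."""
    n = max(len(f), len(g))
    f = f + [Fraction(0)] * (n - len(f))
    g = g + [Fraction(0)] * (n - len(g))
    return [a + b for a, b in zip(f, g)]

def prem(f, q=Q):
    """Remainder of f modulo q: Galois's 'divide every polynomial by q'."""
    f = f[:]
    while f and f[-1] == 0:
        f.pop()
    while len(f) >= len(q):
        c = f[-1] / q[-1]
        s = len(f) - len(q)
        f = [a - c * q[i - s] if i >= s else a for i, a in enumerate(f)]
        while f and f[-1] == 0:
            f.pop()
    return f

# The roots of the original equations, as rational functions of x:
alpha = [Fraction(0), Fraction(-9, 2), Fraction(0), Fraction(1, 2)]   # (x^3 - 9x)/2  = sqrt(2)
beta  = [Fraction(0), Fraction(11, 2), Fraction(0), Fraction(-1, 2)]  # (11x - x^3)/2 = sqrt(3)

def holds(rel):
    """A rational relation between the roots holds exactly iff its
    expression in x reduces to zero modulo q."""
    return prem(rel) == []

print(holds(padd(pmul(alpha, alpha), [Fraction(-2)])))  # True:  alpha^2 = 2
print(holds(padd(alpha, [-b for b in beta])))           # False: alpha != beta
```

The answer is exact, not numerical: a candidate relation holds precisely when its expression in x is a multiple of q, which is the point being made.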
But now you know that all the roots are rational functions of this x. So if you want to know whether any rational relation you can imagine between the roots actually holds, you plug it in, and the computer will tell you whether it holds or not; not approximately, it will tell you exactly. Because what you do is take this rational function of the roots, express it in x, and ask whether it is zero in the quotient, namely whether it is a multiple of q. Now, this is an amazingly powerful thing. So this completely formal way of solving an equation becomes central, and it is of absolutely amazing power in the hands of Galois. And in fact, what Galois wrote was that his problem was to find all rational relations between the roots of an equation. It was not to find invariant functions or whatever. Of course, you have the obvious relations, which are the symmetric functions. But what he found is that in general there are other relations, and it is this which led him to the Galois group. Okay. So now, since I have very little time left, what I would like to do is essentially to end with some open questions. There is a fact, which is that when you look at this universal singular frame, what it tells you, roughly, is that when we are letting epsilon go to zero in dim reg, we shouldn't try to land in the geometry we are used to. The universal singular frame is telling us that we should follow it, and in fact we should correct the geometry at which we arrive, so that renormalization is actually taken into account by the geometry. Now, a small step towards this has been taken in the book with Matilde. What we have done is to give an incarnation of the space of dimension z. It is very tricky, because it is a type II situation.
But what we have found is that at the one-loop level it works perfectly well. And you know, when you do the first steps of renormalization and so on, you have to deal with chirality, because of gauge fields and chiral fermions. And there is a recipe which is called the Breitenlohner-Maison prescription. At the one-loop level, this prescription corresponds to taking the product of the standard geometry by a very specific spectral triple. So here I am alluding to noncommutative geometry. I don't want to spend time on it, but what it is, is a geometry in which the geometry itself is defined by the propagator, or by kernels. And it is the inverse of this propagator, the Dirac operator, which defines the geometry. And the beauty is that quantization can already be taken into account at the one-loop level, because this propagator gets dressed by the quantum fields. So there are formal series in h-bar. Now, what we have done, if you want, in developing this geometry with Chamseddine and also with Walter van Suijlekom, is to develop an action. So now I am no longer dealing with an arbitrary quantum field theory; I want to deal with a very, very specific one. And this action actually depends on the geometry only through the spectrum of this operator, the inverse of the propagator. So it is given by what is called the spectral action, and so on and so forth. This spectral action gives you, in its expansion, the various important terms which occur in the action. Moreover, when you look at the interactions, it allows you to start computing the various terms that you would get as counterterms. Now, I want to end with two questions. As I said, the spectral paradigm of geometry allows one to take the quantum corrections into account at, as I said, the one-loop level.
Now it turns out that quantum field theory tells us that, of course, restricting oneself to the one-loop level, to the one-particle level, is a little bit too naive. And there is a fundamental question, whose answer I don't know, I just have some guess about it, which is: what is the mathematical formalism which will allow us to take the n-particle level into account? My guess is that it is probably dual to the algebraic K-theory of Quillen. So there is this theory of Daniel Quillen, which is very sophisticated. The reason why I say that is because of Schwinger terms. Now, in more recent times, with Chamseddine, Mukhanov, and van Suijlekom, what we have done, by analyzing the cohomology and the duality between homology and K-theory, is to obtain Heisenberg-like relations, which really made me completely happy, because I had the impression that there was no longer any problem with the understanding of the gauge theory, the gauge group and so on, which appeared as if forced upon you by this duality. And what happens, if you want, is that instead of being in some very arbitrary situation, we are in a situation which, because of this duality and so on, is the one that can be encapsulated by the spectral triple recipe in a non-trivial way. Now, the second question is, as I said before: what is the geometric meaning of dim reg? Here I mentioned that when you look at this fibration that I mentioned before, the fibers over z, for z in the disk, are the possible values, and it's very tricky, not of mu, but of mu to the power z times h-bar, h-bar being the Planck constant. Now, as I said, dim reg can be understood as far as one-loop graphs are concerned, and this is based on this space of dimension z and so on.
But the dream that I have is that we will be able to reconcile the understanding of the standard model coupled to gravity, coming from pure gravity on the fine structure of spacetime, with renormalization, and that this reconciliation would somehow correspond to a sort of optimal realization of what Riemann was saying in his inaugural lecture, namely that the true geometry would be entirely based on the forces which are involved in the very small. So I will end on this point. Thank you for your patience. Thank you very much, Alain, for your talk. That was brilliant. I'm sure there are many questions for Alain now. There is even video, so you can see Alain. If you have questions and you are a panelist, you can just go ahead and speak; otherwise you can raise your hand and we can give you permission to speak, or you can put them in the chat as well. Maybe I have a question for Alain, just a very general question. Why don't you remind us of your approach to the standard model and so on? What is your approach to the things which we don't understand in the standard model, or beyond the standard model, like dark matter, dark energy and things like that? Do you have any approach to these? Not really, not at the moment, except for one thing. Many physicists have had misgivings; they think that our model was sort of ruled out because of the Higgs mass. But there was this miracle that very few people are aware of, which is that in early 2010 we had a paper, and in that paper we had described the full model in great detail, and there was an additional scalar field which was there in 2010 and which we had not taken into account in the renormalization group calculations.
Now, because of the discrepancy in the Higgs mass, I had given up, and in 2012 I received an email from Ali Chamseddine, and Ali told me: look, we were stupid, because there are three independent groups of physicists who have invented, really without any a priori reason, a new scalar field, and when you take this scalar field into account, the Higgs coupling can remain stable up to unification energy, which removes exactly what was ruling out our prediction. And Ali was saying we were stupid because we already had this scalar field in our 2010 paper. So of course I didn't believe him. I looked carefully in the paper and so on, and everything was right; the signs were right, everything was right. Now, the reason why I mention this in relation to dark matter is that in one of the three groups this scalar field was called the "darkon" and was supposed to be responsible for dark matter. So of course I'm not really on top of all these things, Ali is, but this gave me a lot of hope. Now, I don't have a good answer to your question except for one thing, which is that, very surprisingly, the spectral triples are relevant not only at very small distances, as in this incredibly intuitive phrase of Riemann, but also at large distances. The reason behind that is that the way the universe communicates with us is spectral. What the universe sends us is barcodes. These barcodes are the absorption lines in the spectra that we get from very distant galaxies, from very distant stars, and so on. So what I am saying is that the universe not only has a language, it has a way of writing: it writes in barcodes. Now, these barcodes are exactly what is present in the spectral triples. So that's the reason why I believe that we would make a great step forward if we began to think of geometry in a spectral manner, in a kind of Fourier mode. This was advocated by Veltman, by the way.
That is, he strongly advocated that you have to work in momentum space. So I don't have an answer to your question, but just these two potential possibilities. Thanks, Alain. I just have a quick question regarding the fact that you have this geometry on the renormalization group. But in the beginning you also had a geometry on the space that you started with, right? It could be just that space, or it could be a spectral triple, or something more fancy. So what I was wondering is: how does the geometry of the renormalization group scheme change when you change the geometry that you start with? That was exactly my question. My question was, what I don't know is how to take the limit of the geometry that you obtain by making the product, say, of an even flat space by a geometry of dimension epsilon, not in the trivial way of letting epsilon equal zero, but by following the universal singular frame. And this I don't know how to do. Moreover, I would add that the model we had with Matilde for a space of dimension epsilon is probably not the right one. We just wanted to propose one possible model, but I wouldn't bet my soul on this particular model, if you see what I mean. But this is a fundamental question indeed, because one can feel, when one handles renormalization and so on, that there is something wrong in working just in integer dimension. I mean, it looks rather stupid. But the way to get rid of it is very subtle, incredibly subtle. And if one could make the geometry follow along, it would be marvelous. Thank you very much. I think, in the interest of time, let's go to the next talk. Let's thank Alain again for his brilliant talk.