It's a pleasure to be back here. So as Emmanuel Imou said, I was here twice for two months, and I enjoyed a very productive stay here at EHUES this year and last year. Now, I have to confess right from the beginning that what I'm going to talk about, in fact, has nothing to do with, I guess, signal transmission and information technology, and no Shannon. So I'm a little bit... sorry? It's OK. Thank you. So in that sense, I couldn't do otherwise; I mean, I never worked on these subjects. There will be randomness coming up, but in quite a different way than in the previous talk. So this being said (I mean, now it's after lunch, you can relax, it's not important for this program), I want to start by telling you, by way of a historical example, what one understands by a random medium and what one understands by effective behavior, and briefly mention what more recent applications might be in that area. And then I want to address two specific issues we've worked on. One of them, in a certain sense, is motivated by numerical analysis: really understanding the numerical errors that a very popular, ubiquitous engineering method makes. And the second topic, if I were bold, I would say has to do with uncertainty quantification: understanding the error one is making, or understanding the fluctuations, characterizing the amount of uncertainty in the solution with random data.

OK, so here is (by the way, this is probably a pointer also at the same time), as I said, kind of a historical example. Probably that's the first time in the physics literature that a problem in the effective behavior of random media was posed and solved: in the famous work of Maxwell on electromagnetism. He was interested in effective conductivity, or effective resistivity. And in the back of his mind, he had the following situation. He had a slab of a homogeneous conductor, a homogeneous conducting medium, and he was imposing a voltage difference between the bottom and the top surface. And as we know, since the medium is homogeneous, this voltage difference leads to a homogeneous current field, which here is indicated by the green arrows. And there is a proportionality between the strength of this homogeneous current field and the voltage drop divided by the height of the slab. That proportionality constant is a material constant: that's the conductivity of the material, and one over that constant is the resistivity. So that's, in a certain sense, the nice homogeneous situation.

And then he was imagining a slab formed not of a homogeneous material, but of a heterogeneous material. So a material which is made out of two bare materials, where, let's say, you have a background matrix made of a material with bare conductivity k2, with, let's say, spherical inclusions of another material with another bare conductivity k1. So you have a mixture of a material with conductivity k2 and a material with conductivity k1. And clearly, since this material is no longer homogeneous, even if the voltage is kept constant down here and kept at a different constant up here, the current which you get is no longer constant. The current density is not constant; in fact, it will be some complicated vector field. But nevertheless, you imagine, or Maxwell imagined, or Maxwell knew, what happens if the height of the slab is large compared to the typical distance of these inclusions.
So if there was a clear scale separation between the characteristic length scale of the heterogeneity and the length scale of the sample, then this heterogeneous medium, in fact, behaves like an idealized homogeneous medium, in the sense that there is a clear proportionality between the voltage drop and the average current which flows through the sample. And back then, of course, you couldn't resort to a computer to determine that effective behavior. So he made the assumption of a dilute regime: he made the assumption that the typical radius of these inclusions is much smaller than their typical distance. And in this dilute regime, he was able to theoretically derive an asymptotic formula for the effective conductivity of this mixture in terms of the bare conductivities k1 and k2 and in terms of the volume fraction p. So that's the first example in the physics literature of finding, or at least identifying, an asymptotically correct formula for the effective behavior of a random medium.

Nowadays, people don't want to restrict themselves to these dilute situations, where, in a certain sense, you can understand the behavior, at least asymptotically, by paper and pencil. They want to treat realistic situations like the permeability of a porous rock or the elasticity properties of a certain mixture; here, it's carbon and polymer. And then, since there is no diluteness here, they have to really resort to numerical simulations to extract the effective behavior from the specification of the statistics. And the engineering method which is used is the method of the representative volume element. That was the starting point for Antoine Gloria and myself: we wanted to understand the error one makes in that method. In fact, that was a question which I heard from Weinan E, that there was still some work to do to understand, from a more mathematical or rigorous point of view, what the errors made in this very common method are.

So what I want to do in this talk is first tell you this story, the error analysis of this representative volume element method. First, I'm going to explain to you what this method is. Then I'm going to tell you why it is interesting to have an error analysis, and what the error analysis is. And then, if I have the time, I want to address a second issue, which really has to do with characterizing the fluctuations of the solution, characterizing the fluctuations, in this case, of the current field, and so characterizing the variances.

OK, so let's start with this representative volume element method. From a mathematical point of view, the heterogeneity of the medium is described by a coefficient field, or a tensor field: something which expresses the possibly anisotropic, microscopic conductivity of the medium, or whatever material property you're interested in. And typically in these applications, that's a uniformly elliptic coefficient field, as we say in PDE theory; in differential geometry, you would say a metric. And then once you have this metric, you can form a differential operator, we call it an elliptic differential operator, which contains the physics.
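To fix notation, here is a restatement, in formulas, of the setup just described and used throughout the talk; the specific normalization of the ellipticity bounds is my paraphrase of "uniformly elliptic", chosen to match the example below where the coefficient lies between lambda and 1.

$$
a = a(x,\omega)\ \text{a random coefficient (tensor) field},\qquad
\lambda\,|\xi|^2 \;\le\; \xi\cdot a(x)\,\xi, \qquad |a(x)\,\xi| \;\le\; |\xi|,
$$
and the associated elliptic operator is
$$
u \;\longmapsto\; -\nabla\cdot\big(a(x)\,\nabla u\big).
$$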
I mean, for instance, if you stick to conductivity, this operator acting on a function u would be equal to 0 if u is a potential that leads to a stationary current. So from a mathematical point of view, that's the abstraction; that's what we're dealing with. And now, in random media, you have, in a certain sense, incomplete information on what this coefficient field looks like. In a certain sense, you just have statistical information, which anyway you think is much more appropriate than the detailed local information. Mathematically, that means you're not working with a single coefficient field, but with a probability distribution on coefficient fields. So the coefficient field is a random field: something which depends both on the position in space and on the realization.

Throughout the talk, it's perfectly fine to keep the following simple example in mind, and the numerical simulations which I'm going to show you are based on this example. We call it a Poisson-type example. So think of d-dimensional space; think of d equal to 2 or 3. The numerical simulation I'm going to show you is two-dimensional, but the theory doesn't care about the dimension. The red points here are distributed according to a Poisson point process in all of space. Then around every one of these points you draw a ball; you consider the ball of radius, let's say, one quarter, if the density of the Poisson point process is 1, which means, roughly speaking, that the distance between the points is on average equal to 1. Then you look at the union of all these balls. That's the blue set; that's a random set. And then, as a very simple example, you take this coefficient field to be equal to 1 times the identity on the blue region and lambda times the identity on the complement, where lambda, I think, is 1 over 10 in the experiment, so some number different from 1.

Before I go to this slide: here you have a random medium which is described in very simple terms. In a certain sense, you have three parameters. You have the number density of the Poisson point process, which I arbitrarily set equal to 1. You have the radius of the balls, which I chose to be one quarter. You have the contrast of the medium, which, let's say, is one tenth, so a factor of 10. And that's it: you have three parameters. And now you want to extract the effective behavior, the behavior on large scales, the effective resistivity or conductivity of this medium. So in a certain sense, you want to go from three parameters to another finite number of parameters, the d-squared entries of the effective tensor. So you start with a very low-dimensional description, and you want to extract very low-dimensional information. But the map is not explicit; there is no simple formula which would relate these three numbers to the d-squared numbers you're interested in. Therefore, what's done in engineering is this representative volume element method.

In order to get an idea of what that is, it's really good to think in terms of this physical application. The functions, you should think of them as electric potentials. The negative gradient then, by Maxwell, is the electric field. The coefficient field has the meaning of a conductivity, so if you multiply the electric field by the conductivity, you get the electric current. And the current is stationary if its divergence vanishes.
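To make this Poisson/Boolean example concrete, here is a minimal sketch in Python; this is not code from the talk, just an illustration under the parameters stated above (unit-intensity Poisson points, balls of radius one quarter, contrast lambda = 1/10), with the box size L and the grid resolution n being arbitrary illustration choices.

```python
import numpy as np

def sample_boolean_coefficient(L=16, n=256, radius=0.25, lam=0.1, rng=None):
    """Sample the Poisson/Boolean coefficient field on an L x L periodic box,
    discretized on an n x n grid: a = 1 on the union of balls, a = lam outside.
    The returned array is the scalar multiple of the identity at each grid cell."""
    rng = np.random.default_rng() if rng is None else rng
    # Poisson point process of unit intensity: the number of points is Poisson(L^2)
    num_points = rng.poisson(L * L)
    points = rng.uniform(0.0, L, size=(num_points, 2))
    # Grid of cell centers
    xs = (np.arange(n) + 0.5) * (L / n)
    X, Y = np.meshgrid(xs, xs, indexing="ij")
    inside = np.zeros((n, n), dtype=bool)
    for px, py in points:
        # Periodic (torus) distance from each grid cell to this Poisson point
        dx = np.minimum(np.abs(X - px), L - np.abs(X - px))
        dy = np.minimum(np.abs(Y - py), L - np.abs(Y - py))
        inside |= dx**2 + dy**2 <= radius**2
    # 1 on the "blue" set (union of balls), lambda on the complement
    return np.where(inside, 1.0, lam)

if __name__ == "__main__":
    a = sample_boolean_coefficient()
    print(a.shape, a.min(), a.max())
```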
So what's the purpose of the effective behavior, the effective conductivity of the medium? It should provide a linear relationship between the average potential gradient and the average current. The microscopic conductivity is the proportionality between the local field and the local current, and the effective conductivity should give you the relationship between the spatially averaged field and the spatially averaged current. That's what this is supposed to be.

So how do engineers do this in practice? They artificially introduce some finite-size domain. The simplest thing is to think of a torus: they introduce some box and identify opposite boundaries. And then in this finite-size region, they sample their medium according to the same specifications as in the whole space. So they look at the Poisson point process with the same density on the torus, and everything else exactly the same. So now you have sampled what you think are the right statistics, but not in all of space, only in the torus. And then they solve d problems, where d is the dimension; so in two dimensions, just two simple linear partial differential equations on the torus. They seek a periodic function phi_i which solves this elliptic differential equation. I can write this on the blackboard because that's important: you take a times the gradient of x_i plus phi_i, where phi_i is supposed to be periodic, and you want the divergence of this to be equal to 0. And of course, I can write this gradient as e_i plus the gradient of phi_i, where e_i is the i-th unit vector.

This function is called the corrector because it does the following thing; you may also call the result harmonic coordinates. You start from the affine function in the i-th coordinate direction, and you correct it, so you add to it the function phi_i, in such a way that the resulting function is harmonic. This is why these coordinates are sometimes called harmonic coordinates, or why this is called a corrector. So that's the problem which you solve. And then you take the current which belongs to these harmonic coordinates, so this expression, the field multiplied by the conductivity, and you average it over the torus. And that is what you take as the approximation to your homogenized coefficient. So let me also write it here: a_hom times e_i is that average. That's the representative volume element method.

And here is a numerical simulation, just to give you a flavor of what you're doing. You're picking a coordinate direction; here we picked the first coordinate direction in our two-dimensional model. So we're looking for a potential which is a periodic perturbation of the potential that grows linearly in the x1 direction, such that its current is divergence-free. Here you see the potential lines. Clearly, they're dense in the region where you have a low conductivity, and they're sparser in the region where you have a high conductivity, because there it's more like a conductor where the field should be expelled. And those are the flow lines; that's the current which belongs to this. Now you take this vector field, you average it over the torus, you get a vector, you get two numbers, and that's the first row of the effective matrix. And then you do the same thing in the second coordinate direction. In this case, the level lines will essentially be horizontal; the gradient will be mostly pointing in the e2 direction. You get the same thing: you take the average of this field, and you take this as the second row of your matrix.
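Written out cleanly, the blackboard formulas from this step read as follows; this is just a restatement in formulas, with the torus being the box of side length L with periodic identification.

$$
-\nabla\cdot\Big(a\,\nabla\big(x_i+\phi_i\big)\Big)
\;=\;
-\nabla\cdot\Big(a\,\big(e_i+\nabla\phi_i\big)\Big)
\;=\;0
\quad\text{on the torus},\qquad \phi_i\ \text{periodic},
$$
$$
a_{\mathrm{hom},L}\,e_i \;:=\; \frac{1}{L^d}\int_{[0,L)^d} a\,\big(e_i+\nabla\phi_i\big)\,dx,
\qquad i=1,\dots,d,
$$
so the d cell problems give the d columns (equivalently, by the symmetry that is built in, as mentioned just below, the d rows) of the approximation to the effective tensor.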
So that's exactly what you're doing here. And, by the way, it's not surprising that this number here is exactly equal to this number; the symmetry of the resulting matrix is, in a certain sense, built in. And now the question is, how much should you trust these numbers? Intuitively, it's clear that you shouldn't trust these numbers if this representative volume element is too small. Then it certainly has little to do with the effective behavior which you would like to capture. These numbers should become better and better the larger this representative volume element becomes. And that's the error analysis we were interested in.

Before turning to this, one should also point out that the answer which you get here is a random answer, because it depends on the realization. So it's not yet a deterministic quantity. It's very clear that if you sample your medium once more, you will get a different solution, and therefore you will get a different average. So this is still a fluctuating, random number. But on the other hand, it's also intuitive that as you make the size of this box larger (and here I didn't draw it as a larger box, but I rescaled it back to unit length and made the balls smaller; that's mathematically equivalent), the variance should go down. That's what we call the random error. And that's indeed what one sees in the numerical simulations. So here you have three different realizations of the random medium. Of course, you get three completely different solutions, three completely different current fields, and three quite different matrices; they differ, let's say, by 10%. So that's what I said: the answer which you get is what we would call a random variable, because it depends on the realization. Here again is what we saw before. But if you do the same simulation with a much larger representative volume element, then you see that the fluctuations clearly go down, from about 10% to about 1%.

So the question is... yeah, go ahead. Do the results rely on the assumption that there is percolation of the Boolean model? They don't rely on that; that's not necessary here, because we didn't put the conductivity equal to 0 on the complement. Remember that this coefficient field was defined to be equal to the identity on the balls and equal to lambda times the identity on the complement, and lambda was chosen to be 1 over 10, but positive. If this lambda were allowed to be 0, then you would get into the situation of percolation. That's, of course, also a well-studied problem, but it has additional difficulties, additional challenges, and I'm not addressing it in this talk. So whether the medium is percolating or not will play some quantitative role, but not a dramatic qualitative role. Does that answer the question?

OK, so that's only one type of error. There is a second error, which is slightly more subtle: even if you disregard the fact that this is a fluctuating quantity, a random quantity, by, for instance, taking its expectation, it still would not be the right value, because of the following phenomenon. When you go to the torus, you've falsified the statistics. If you unwrap your torus into the plane, you have a periodic coefficient field, which means you've introduced spurious long-range correlations. Instead of having a Poisson process, you're actually looking at the wrong statistics by doing this.
So that's a systematic error, kind of a bias. And that effect one also sees. So if we're just looking at the expectation of this quantity (of course, numerically we're looking at the empirical average, but let's say we did many, many samplings, so that we actually access the expected value), we know, because of symmetry considerations, that it's isotropic, so it's just about this one number. And even this number, so even after taking the expectation, still depends on L and only converges to its final value as the size becomes large.

So there are two types of errors involved in this problem. There is a random error, which comes from the variance of the answer, and there is a systematic error, which has to do with using the wrong ensemble. Qualitative theory, which is 30 years old, tells us that both types of errors go to zero. And in fact, we have a nice Pythagoras rule in probability space: the expected square error is the sum of the square of the random error plus the square of the systematic error. And both of them go to zero. Now we were interested in at what rate these two errors go to zero.

Why is that a question of practical importance? Because it determines what type of numerical strategy you use when you want to infer the effective behavior. When an engineer does that, he has two knobs to turn: he can either look at a few very large samples, or he can look at many moderately sized samples. And it's clear that looking at many samples will only affect the random error; it will attenuate the damaging effect of the random error, but it will have no effect on the systematic error. So therefore, in order to find the optimum (if you have a certain number of unknowns to spend, because you don't want to wait three years, but just three weeks, to get your result), you have an optimization here. If you want to reach a certain error, you can either do it this way or that way, and which way is better depends on the scaling of these two errors. So this is, from a numerical point of view, a relevant question. And of course, since it's a relevant question, people have looked at this for a while. But it's only in fairly recent years that one got a satisfying and optimal answer.

Just let me mention, because it came up in the previous talk, that the mathematical tool by which we get many of these results is, in fact, the logarithmic Sobolev inequality. So we're using concentration of measure, a logarithmic Sobolev inequality in this infinite-dimensional setting, to get some of these results. But I won't talk much, or will not talk at all, about the methods here.

So the final result we're getting is the following: the systematic error is much, much smaller than the random error. I point out that the difference here is not just a constant factor; the systematic error is about the square of the random error. So it definitely pays to look at many, many realizations, because you first have to make sure that you make the bad effect of the random error small by looking at many realizations, and only then do you have to worry about the systematic error. And since we wrote this, things have progressed: by now one even understands that the fluctuations are approximately Gaussian. So the error not only has this scaling, it also has its own structure. And that's a little bit what I want to talk about.
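To summarize the two errors in formulas: this is a sketch in my own notation, writing $a_{\mathrm{RVE},L}$ for the random output of the method on a box of size L; the rates are only indicated schematically, consistent with the statement above that the systematic error is about the square of the random error.

$$
\mathbb{E}\,\big|a_{\mathrm{RVE},L}-a_{\mathrm{hom}}\big|^2
\;=\;
\underbrace{\mathrm{Var}\big[a_{\mathrm{RVE},L}\big]}_{(\text{random error})^2}
\;+\;
\underbrace{\big|\mathbb{E}\big[a_{\mathrm{RVE},L}\big]-a_{\mathrm{hom}}\big|^2}_{(\text{systematic error})^2},
$$
with random error of order $L^{-d/2}$ and systematic error of order $L^{-d}$, up to logarithmic corrections.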
And the second part of my talk... how much time do I still have? About 20 minutes or so. OK, that's much more than I will need. So the first part was about these error estimates for this engineering method, and the second part is about the fluctuations of the solutions. So that's now changing the perspective a little bit.

Why does homogenization pay? Why does this theoretical concept pay? Well, it's related to a separation of scales, and that is something which I already mentioned in the context of Maxwell's example. His separation of scales was the separation between the typical size of the domain, which was the height of the slab, and the typical scale of the medium, which was the distance between the inclusions. So we have a macroscopic scale, which is set either by the size of the domain or by the characteristic scale of the right-hand side of the equation, and the microscopic scale, which is set by the medium itself, so the diameter of the balls, the typical distance between the balls. And let's set this one equal to 1; we non-dimensionalize in this way. So we have this scale separation between a large geometric, external scale and this much smaller intrinsic scale.

And now let's think of the simple elliptic equation with a right-hand side f. We put this length scale into the right-hand side, which means we're looking at a right-hand side which varies on a very large scale, which we call l, and whose amplitude we scale so that the amplitude of the solution is of order 1. Homogenization tells you that instead of solving this very complex problem, where you would have to resolve, at the same time, the small scale of the coefficient field and the large scale of the domain and of the right-hand side, you can solve a much simpler equation with the effective behavior, with the homogenized coefficient. So qualitative homogenization tells you that this fluctuating, random solution, here in blue, is in fact pretty close to this non-random, non-oscillatory solution. And of course, also there, you want to understand what the error is. In periodic homogenization, that's very well understood: the typical fluctuations happen on scale 1, and the typical deviation from the limit is of the order of 1 over l. And in fact, the same is true in the random case.

So when you really want to quantitatively compare the solution of your heterogeneous equation to the solution of the homogenized equation with the effective behavior, you go back to this object, to the corrector, because that helps you to understand the error. Here, again, is the schematic picture which I drew. These functions phi_i correct the affine behavior in such a way as to get harmonic coordinates, to get a solution of this equation. And therefore, instead of comparing u directly to u bar, you should compare u to u bar corrected by a modulated version of this corrector. That's what I tried to explain with this picture. Down here, you have the true solution, which oscillates around the homogenized solution. And in fact, you get a much better approximation of the true solution by taking the tangent to the homogenized solution and doing this corrector construction on top of the tangent. Then you get something where also the gradients are close, not just the solutions. And in fact, the relative error is like in periodic homogenization. So here, you wouldn't see the effect of randomness, in a sense.
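In formulas, the construction just described, the two-scale expansion, reads roughly as follows; this is a sketch in my notation, with summation over the repeated index i.

$$
-\nabla\cdot\big(a_{\mathrm{hom}}\nabla\bar u\big)=f,
\qquad
u(x)\;\approx\;\bar u(x)+\phi_i(x)\,\partial_i\bar u(x),
\qquad
\nabla u(x)\;\approx\;\big(e_i+\nabla\phi_i(x)\big)\,\partial_i\bar u(x),
$$
with a relative error of order $1/l$, as in periodic homogenization.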
That's a result which tells you that random homogenization is not worse than periodic homogenization. But in fact, there is much more structure in the random case, and that has to do with the fluctuations. That's the story which I want to tell last.

So let's suppose you were interested... Here, again, is our solution of the heterogeneous problem, which we don't really want to compute, so we seek a theoretical understanding of it, with a right-hand side which has the right scaling. And let's suppose we're not interested in the solution in an extremely point-wise way, in understanding the solution at every point, but we're just interested in understanding certain macroscopic observables; like, in oil recovery, what's the mean flow rate? So macroscopic observables would look like this: you take some spatial average of your solution u, and the spatial average is on the same scale as the solution, on this macroscopic scale l. And now the question is, can we get a better understanding of the fluctuations of this quantity? So that's now getting a bit into this business of uncertainty quantification. Can we not just understand the size of these fluctuations, but really characterize them?

A while ago, we found out that they have the order which you would expect from central limit theorem scaling, so 1 over l to the power d over 2. You see that this is very different from the previous error estimate, where in a point-wise sense the difference is 1 over l; here the error of the average is more like 1 over l to the power d over 2. And that, of course, means that the next natural step was: can we understand the limit of this quantity if we put it onto the right scale? Can we characterize these fluctuations? And the first thing we tried (and now "we" means the small community that's looking at these problems) was to plug in the two-scale expansion, the way you get a pretty good approximation to your heterogeneous, random, fluctuating solution by this construction of taking the homogenized solution and modulating it with the corrector. That was the first guess. And then you would be drawn to understanding the covariance of the corrector.

So what we first looked at was getting a better understanding of this object, the covariance structure of the corrector. And together with Jean-Christophe Mourrat, who is a probabilist in Lyon, we were able to show that there is a limiting covariance structure on large scales, and that, in a certain sense, even if we could characterize it only in some ways, it has the homogeneity of the Green's function. So here is the covariance function in blue, and that would be the homogeneity of the Green's function; so it's homogeneous of order minus d over 2. I mean, he is a probabilist, and of course, if you see a random field where the covariance structure is the one of the Green's function, you think of the Gaussian free field. But in fact, it's not the Gaussian free field; the covariance structure is not equal to some Green's function. So that was the first interesting finding: it's more subtle. And then Jean-Christophe Mourrat, with Gu from Stanford, even found out that what motivated us in the first place to look at this quantity, namely the hope that we could understand the variance of any solution by looking at the variance of the corrector, is not as simple as that.
Because both of these limiting variances exist, but they're not equal. So it was not the right idea to use the two-scale expansion in this simple way in order to reduce the variance of any solution to the variance of the corrector. That was a little bit of a puzzle: we didn't quite understand the covariance structure. And then, very recently, with Mitia Duerinckx, who is a PhD student of Antoine Gloria in Brussels, we found what I think is the right way to understand these fluctuations, at least to leading order, and to characterize them.

What we introduced is something which we call the homogenization commutator, an object which is built on the objects which anyway you have to compute, on the correctors. It looks at the difference between the current and the field with the effective tensor applied to it. So it's a very simple object. And we called it a commutator because, in a certain sense, it's the difference between immediately applying the microscopic conductivity, or first homogenizing and then applying the effective conductivity. That's a random matrix field.

And the first observation is a completely deterministic one: namely, that the fluctuations of our observable can, in a path-wise, point-wise way, be played back to this quantity, to this capital Xi, to this homogenization commutator, by solving an adjoint problem. For the adjoint problem, remember that g is the averaging function defining the observable we are interested in; we solve an adjoint problem on the homogenized level, and that defines the v bar. And if we have the u bar, which is the homogenized solution, and the v bar, we can, to leading order, with a precise relative error, characterize the fluctuations. And the second part of the finding was that this strange object (which is not that strange) on large scales behaves like white noise, like Gaussian white noise. If you look at the correlation function of this tensor-valued object and you put it on the right scale, then essentially you just see a peak at the origin and then flat zero. So on large scales, this homogenization commutator behaves like white noise, and not just any white noise, but Gaussian white noise. So it's characterized by a single 4-tensor; it's a 4-tensor because this object was a 2-tensor, so its covariance is a 4-tensor. And if you put these two results together, you get a complete asymptotic characterization of the variance in terms of this new object Q, which describes the covariance of that thing. So that's, I think, a nice and also pleasing characterization of the limiting variance.

And the nice thing, from a practical point of view, the message (in America, I would say, the take-home message) is that if you're interested not just in the solution, but in the fluctuations, in the variances, because you're interested in uncertainty quantification, you don't have to do any new work. Because anyway, to get the effective homogenized behavior, you had to resort to your representative volume element method; you had to solve for the harmonic coordinates, you had to find the harmonic coordinates, you had to solve these problems which I talked about in the first part of the talk, because they give you the effective behavior, or at least an approximate effective behavior. But if you have these objects, then you might as well look at the capital Xi, at the homogenization commutator, or at least at an approximation of it.
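For the record, here is a sketch of this object in the notation of the corrector problems from the first part; the precise normalizations and signs are my paraphrase rather than formulas copied from the slides.

$$
\Xi_i \;:=\; a\,\big(e_i+\nabla\phi_i\big)\;-\;a_{\mathrm{hom}}\,\big(e_i+\nabla\phi_i\big),
\qquad i=1,\dots,d,
$$
a random tensor field. On large scales it behaves like a Gaussian white noise, so its rescaled covariance is captured by a single four-tensor $Q$, and the fluctuations of an observable $\int g\,u$ are characterized, to leading order, by pairing $\Xi$ with $\nabla\bar u$ and with $\nabla\bar v$, where $\bar v$ solves the adjoint problem on the homogenized level with data built from $g$.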
And you may look at a Green-Kubo type formula, which serves as a proxy for its covariance structure. So if you want to get a_hom, you automatically, without any additional numerical effort, have access to this 4-tensor Q. And with the help of this 4-tensor Q, not only can you characterize the leading-order behavior of the fluctuations of your solution, you can also characterize the leading-order behavior of the variance of observables. So therefore, I would hope that this is an insight which can be used in practice.

So that brings me to the summary. What we've been working on quite extensively in the past years is making this type of homogenization of random media more quantitative. And here, I've given you two examples: understanding the error in the representative volume element method, and an example of, if you want, uncertainty quantification to leading order, which turns out not to be computationally more expensive than what you have to do anyway to get the effective behavior. So that's it. Thank you.

Are there any questions or comments? So first, a question. You have discussed fluctuations at the level of the central limit theorem, essentially. Can one use the same techniques and work out the large deviation scale, to see what happens there? That's a good question. I think, to some extent, that can be done, and I think that has also, in part, already been tackled by the probability community. I mean, one piece of information which I didn't mention here is that we characterize the variance and we know that the fluctuations are approximately Gaussian. But as you say, looking at the large deviation scale is another type of question. So my feeling is that this could be done, and that this is, in part, done already in the probability community, but that it requires different types of techniques.

And a second question: since you alluded, in the first slide, to this limiting regime of rare inclusions, so very few balls, can one also use this Malliavin expansion to get, I mean, first order, second order, and perhaps an expansion, a sort of chaos expansion, on the Boolean model? So now you mean an expansion in the volume fraction. Yes, in the intensity, which is, yeah. And so I think that has been done. I don't know. I don't think it has. So two of my co-authors, Mitia Duerinckx and Antoine Gloria, have, in a similar model, looked at exactly this; I mean, they've shown that this is an analytic function of the volume fraction and developed, if you want, a way of getting at the whole series. So that can be done, but not with the Malliavin calculus, not with these tools. I mean, in most of our proofs we use, if you want, something like Malliavin calculus, because we're taking the derivative with respect to the noise, which, in our case, is taking the derivative with respect to the coefficients. So we try to understand (and that's at first a completely deterministic question) how sensitively the solution at this place depends on changing the coefficient at that place, which is described by the variable-coefficient Green's function. And that's really computing the Malliavin derivative, and that we then put into the concentration-of-measure, logarithmic Sobolev inequality machinery to get many of the estimates.
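As a hedged illustration of the sensitivity computation mentioned in this last answer: the first-order perturbation of the solution with respect to the coefficient, expressed through the variable-coefficient Green's function G, is the standard formal computation below, not a formula shown in the talk. For the equation $-\nabla\cdot(a\nabla u)=f$, perturbing $a$ to $a+\delta a$ gives, to first order,

$$
u_{a+\delta a}(x)-u_a(x)\;\approx\;-\int \nabla_y G_a(x,y)\cdot \delta a(y)\,\nabla u_a(y)\,dy,
$$
which expresses how sensitively the solution at the point $x$ depends on changing the coefficient at the point $y$, and it is this kind of derivative that is then fed into the concentration-of-measure, logarithmic Sobolev inequality machinery.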