We're going to have the first SIAM student chapter talk of the semester: Francois will be here to talk about cosmology. So if you guys come out, you'll support the SIAM student chapter, hear the talk, meet some people from outside the department, and get some free pizza out of it. I mean, that's why we all come to the talks, right, the free food. That's this Thursday, and continuing on a weekly basis. So that's my spiel. Okay, the title of my talk is Relaxation on Measures, and it starts with a pretty familiar situation, at least for people who do PDEs or calculus of variations. We have some functional F, something like F(u) = ∫_Ω f(u) dx for some integrand f. The normal game is that we want to find a minimizer of this functional, so we take a minimizing sequence u_n, and what we can do with it depends on the growth of f. In the standard setup, we assume p-growth: f(ξ) is bounded below by roughly |ξ|^p for some power p, and we really, really like the case where p is strictly between 1 and infinity, because then F(u_n) bounded implies ∫ |u_n|^p is bounded, which implies u_n is bounded in the space L^p(Ω). And when p is strictly greater than 1, L^p is a reflexive space: it can be identified with its own bidual. This is the really nice situation for the functional analysis, because reflexive spaces are the ones in which bounded sequences have weakly convergent subsequences, and this is how we get a minimizer: up to a subsequence, u_n converges weakly to some u in L^p. The case that I want to talk about today is what happens when we don't have this nice p-growth.
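As a sketch, the chain of implications just described, the direct method under p-growth, can be written out like this (notation mine, matching the talk):

```latex
% Direct method under p-growth, 1 < p < infinity:
\[
  f(\xi) \ge c\,|\xi|^p
  \quad\Longrightarrow\quad
  \sup_n \mathcal{F}(u_n) < \infty
  \;\Longrightarrow\;
  \sup_n \|u_n\|_{L^p(\Omega)} < \infty .
\]
% Since L^p(\Omega) is reflexive for 1 < p < \infty, a bounded sequence
% has a weakly convergent subsequence:
\[
  u_{n_k} \rightharpoonup u \quad \text{in } L^p(\Omega),
\]
% and if \mathcal{F} is weakly lower semicontinuous,
\[
  \mathcal{F}(u) \le \liminf_{k\to\infty} \mathcal{F}(u_{n_k}),
\]
% so the weak limit u is a minimizer.
```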
What if we just have linear-type growth on our integrand? This is a problem that honestly comes up from time to time: in the theory of plasticity, like perfectly plastic materials, you're going to see integrands like this, and in some imaging models, like total variation regularization, we really do expect the growth to be linear. And I think it's interesting in its own right: what goes wrong in this case? [Audience: don't we also need coercivity to even get a bound from a minimizing sequence?] Yeah, you're right. Let's say we have coercivity today, something like f(ξ) ≥ c|ξ|, so that a minimizing sequence is bounded in L^1. So ultimately the kind of problem I'm talking about is this: even once we know the sequence is bounded in L^1, the space of L^1 functions is not reflexive. It's not even the dual of a separable space, which is the other nice case. This is functional analysis stuff; the point is just that we can't do the nice things we normally like to do. But the continuous functions with compact support in Ω, the space C_c(Ω), are separable, and their dual is the space of finite signed Radon measures. And in fact, L^1(Ω) embeds continuously into the set of measures: given u, we define the measure μ_u(E) = ∫_E u dx. So now we're looking at u as a density against Lebesgue measure, and u times Lebesgue measure is a finite Radon measure on R^n. So we have this nice sort of picture.
I mean, if we expand our space to the set of all Radon measures, then we can get at least some kind of compactness out of a bounded sequence: up to a subsequence, u_n converges weakly-* to some measure μ, where this means ∫_Ω φ(x) u_n(x) dx → ∫_Ω φ dμ for every φ continuous with compact support. One thing to say about this kind of convergence is that it's worse, worse than even weak convergence in a certain sense; it's really awful and annoying to deal with. Because, for instance, if u_n goes weakly to u in L^2, let's say, then at least we know that for every measurable set E, the integral of u_n over the set converges to the integral of u, since the characteristic function of E is a valid test function. With weak-* convergence of measures, we don't even have this property, which is like, come on, guys. In general, the way that I try to keep it straight is: if the μ_n and μ are nonnegative, then we can characterize it nicely. We can say μ_n goes weakly-* to μ exactly when μ(U) ≤ liminf μ_n(U) for open sets U and μ(K) ≥ limsup μ_n(K) for compact sets K. So we have lower semicontinuity on open sets and upper semicontinuity on compact sets. Let me try to convince you why this might be true. If we could test against characteristic functions, we'd get convergence on sets directly. Here our test functions can't be characteristic functions: to stay continuous, they either have to overshoot the set by a little bit, in which case you pick up a little bit more, or undershoot a little bit, in which case you miss some; you can't hit it exactly. So this is saying: if you want the whole open set, you might get a little bit more; if you want the compact set, you might miss something near the boundary. [Audience: so just to make sure, a witness to this failure of set-wise convergence might be, like, a Dirac mass moving along some set and landing on its boundary?] Yeah, yeah.
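The one-sided inequalities on open and compact sets can be seen concretely with Dirac masses δ_{1/n} ⇀* δ_0, which is a small numerical sketch (the function names here are mine, not from the talk):

```python
# Sketch: delta_{1/n} converges weakly-* to delta_0, but set-wise
# convergence fails on sets whose boundary carries mass.
# Pairing a Dirac mass delta_x with a continuous test function is phi(x).

def pair(phi, x):
    """<delta_x, phi> = phi(x)."""
    return phi(x)

phi = lambda t: max(0.0, 1.0 - abs(t))  # continuous, compactly supported

# weak-* convergence: <delta_{1/n}, phi> -> <delta_0, phi> = phi(0) = 1
vals = [pair(phi, 1.0 / n) for n in (1, 10, 100, 1000)]
print(vals[-1])  # close to phi(0) = 1

# Set-wise convergence fails:
# open set U = (0, 1):  delta_{1/n}(U) = 1 for all n, but delta_0(U) = 0,
#   consistent with lower semicontinuity 0 = mu(U) <= liminf mu_n(U) = 1.
# compact K = {0}:      delta_{1/n}(K) = 0 for all n, but delta_0(K) = 1,
#   consistent with upper semicontinuity 1 = mu(K) >= limsup mu_n(K) = 0.
delta = lambda x, member: 1.0 if member(x) else 0.0
U = lambda t: 0.0 < t < 1.0
K = lambda t: t == 0.0
print(delta(1.0 / 1000, U), delta(0.0, U))  # 1.0 0.0
print(delta(1.0 / 1000, K), delta(0.0, K))  # 0.0 1.0
```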
But there's more. Dirac masses are a nice way to see it, because they're everyone's favorite singular measure, right? But a moving Dirac mass is somehow a normal kind of thing that can happen: even with weak convergence, things can run away to infinity or toward the boundary. The real new thing, the thing measures can do that they couldn't do in the L^p case, is that they can concentrate on singular sets. So that's this picture: mass will concentrate and become, say, a Hausdorff measure on the boundary here. In general, in order to say that μ_n(E) converges to μ(E), and this is true for vector-valued measures too, we need the total variation of μ on the boundary of E to be zero: |μ|(∂E) = 0. This is the sufficient condition you normally need to verify. If you're reading blow-up-type arguments, they're always making sure of this, because they don't want to deal with this problem; in general we only have weak-* convergence. So this is just a diatribe on weak-* convergence. And anyone who's taken measure theory at all knows the perfect example of when we have L^1 control but not L^p control for any p greater than one: u_n equals n times the characteristic function of [−1/(2n), 1/(2n)]. Here we're concentrating mass on a singular set. We've got this tall thin spike with mass exactly one: it's exactly controlled in L^1, not in any L^p, and u_n is not converging in any function-space sense, but it does converge weakly-* to the Dirac mass δ_0. And this shows the failure exactly: the integral of the u_n's over the singleton {0} is zero for every n, since a singleton has Lebesgue measure zero, yet in the limit we wind up with mass one on that set. So this is one of the main new problems that can happen.
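A quick numerical sketch of this concentration example (grid sizes and the test function are my choices): the L^1 norm of u_n stays 1, the L^p norm blows up for p > 1, and the pairing with a continuous test function tends to φ(0), i.e. weak-* convergence to δ_0.

```python
import math

# u_n = n * chi_{[-1/(2n), 1/(2n)]} on [-1, 1], sampled on a midpoint grid.
def stats(n, p=2.0, N=200_000):
    dx = 2.0 / N
    xs = [-1.0 + (i + 0.5) * dx for i in range(N)]      # midpoint rule
    u = [float(n) if abs(x) <= 0.5 / n else 0.0 for x in xs]
    phi = [math.cos(x) for x in xs]                     # test function, phi(0) = 1
    L1 = sum(abs(v) for v in u) * dx                    # ~ 1 for every n
    Lp = (sum(abs(v) ** p for v in u) * dx) ** (1 / p)  # ~ n^(1 - 1/p), blows up
    pairing = sum(v * w for v, w in zip(u, phi)) * dx   # -> phi(0) = 1
    return L1, Lp, pairing

for n in (1, 10, 100):
    print(stats(n))
```

The L^2 norm grows like √n while the mass and the pairings stabilize, which is exactly the "L^1 but no L^p control" picture from the talk.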
But the interesting thing is: let's not think of this as a problem, right? Let's think of this as a feature, not a bug. This is something measures are made for: we actually have more diverse behavior that we can describe if we look at the space of measures rather than just the space of functions. Okay, so this is all well and good, but we've been doing everything at the level of functions, and measurable functions alone are only so interesting. Really, people are more interested in so-called gradient integral functionals, of the type ∫ f(∇u) rather than ∫ f(u); this is more the kind of integral functional that a PDEs or calculus of variations person might study. In this case, the natural space to consider is not L^1(Ω) but W^{1,1}(Ω), the Sobolev space of L^1 functions whose distributional gradient is also in L^1. And here the same problem emerges. If we have a bounded sequence in W^{1,1}, then by Sobolev-type inequalities, if the gradient is bounded in L^1, the function itself is bounded in a space of higher regularity, so u_n will still have an honest-to-goodness strong limit u. But the gradients ∇u_n are going to suffer from the same problems: they might only converge weakly-* to some now vector-valued measure. And if you're not comfortable with a vector-valued measure: it's a measure, except its values are vectors. Let's just leave it at that for the moment; don't worry too much about it. So the gradients converge to something called Du. It's a measure, and it still satisfies the nice distributional derivative identity; it's just that the distributional derivative is a measure.
That is to say, the integral of u against the i-th partial derivative of a test function equals minus the integral of the test function against the i-th component of Du. You can think of the vector-valued measure as just an n-tuple of signed Radon measures, if you want. So the identity is just what you would think it would be: dx on one side, the measure Du on the other. So these are functions whose derivatives aren't functions but measures: u in L^1 such that Du is a finite measure. This has a special name: these are the functions of bounded variation, BV. And in general, for a bounded sequence in W^{1,1}, all we know is that it has a weak-* limit that's a BV function. That's all we've got. But BV functions are actually really nice; they have some really cool properties. Like, if a set E has a nice enough boundary, and I won't go any further into "nice enough," then the characteristic function of E is a BV function. This is the function that's 1 in the set, 0 everywhere else, and its derivative is actually the normal vector to the boundary times the surface (Hausdorff) measure on the boundary. So somehow this gives us a really nice perspective on geometry: characteristic functions, viewed as BV functions, have derivatives that see the normal vector to the boundary and the Hausdorff measure. In particular, if we look at the total variation |Dχ_E|, this gives us the perimeter of the set E. And in fact there's a whole direction you can go with this, the theory of sets of finite perimeter, which gives a really nice general concept of perimeter of a set; I won't get into it. This is just to say the space BV is really ubiquitous in geometric measure theory. On one hand measures are a problem, but on the other hand, they're a cool place to be.
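The "total variation counts the perimeter" idea can be sketched in one dimension (discretization choices here are mine): the derivative of the characteristic function of an interval is a pair of Dirac masses, so its total variation is 2, one for each boundary point; for a smooth function the same quantity approximates ∫|u'| dx.

```python
import math

# Discrete total variation: sum of absolute successive differences.
# For chi_[a,b] this counts the jumps: |D chi|(R) = 2, the "perimeter"
# of an interval (two boundary points).
def total_variation(samples):
    return sum(abs(b - a) for a, b in zip(samples, samples[1:]))

N = 10_000
xs = [i / N for i in range(N + 1)]                       # grid on [0, 1]
chi = [1.0 if 0.25 <= x <= 0.75 else 0.0 for x in xs]    # chi of [1/4, 3/4]
print(total_variation(chi))   # 2.0: one jump up, one jump down

# For a smooth function the discrete TV approximates int |u'| dx instead:
smooth = [math.sin(2 * math.pi * x) for x in xs]
print(round(total_variation(smooth), 3))   # ~4.0 = int_0^1 |2*pi*cos(2*pi*x)| dx
```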
But let's remember, we started talking about integrands of ∇u, right? And in the case where everything is happening in L^p, we have this really nice classical result giving necessary and sufficient conditions for the functional to be lower semicontinuous with respect to weak convergence. This matters because it's the final step in a calculus of variations argument: u_n going weakly to u, and then we want to say F(u) is no greater than liminf F(u_n). And this happens if and only if f is convex; that's the nice classical result on L^p spaces. And we know that if u_n converges weakly-* to u, now with p = 1, and the limit u happens to be in W^{1,1}, then we still get this nice lower semicontinuity when f is convex. But when the limit is only a BV function, it's not even clear how to define the functional: the formula says "evaluate f at ∇u(x) at every point," and Du need not have a pointwise density. It's not really clear what the right definition is for BV functions if we want a similar kind of lower semicontinuity. So we have this functional on W^{1,1}, and we'd like to extend F to BV in a way that preserves lower semicontinuity. There's a really easy way to do this. For u in BV, define F(u) to be the infimum of liminf F(u_n) over all sequences u_n in W^{1,1} converging weakly-* to u. Done. Okay, this is unsatisfying, right? If you've seen this before, it's called the relaxation. Basically, we just take the greatest lower semicontinuous functional sitting below F; this is the thing that does that. Obviously, the abstract definition isn't really interesting. What we're really interested in is: can we write the relaxation as an integral functional in some way? And here I'm only thinking about f with linear growth; the relaxation is exactly what makes the problem well-defined in the linear-growth case.
[Audience: but we do need coercivity to actually do the calculus of variations?] Yes, for the minimization we do; for the relaxation formula itself it won't matter. [Audience: when are these normally over? Seven.] Seven, okay. [Audience: is Ω bounded?] Yeah, Ω is bounded, but it won't actually matter; that's just curiosity. Now, it's nice to look at our nice trivial example of functions converging weakly-* and not strongly, and see what happens when we plug them into an f which is convex but not linear. My favorite example of such a function is f(ξ) = √(1 + |ξ|²). It equals 1 at ξ = 0, and it asymptotically approaches the linear function |ξ|. So this is exactly the kind of function we want: nice and convex, but definitely nonlinear, with linear growth. And let's see what happens with our u_n's. These are the functions that are zero over here and then jump up to one over a ramp of width 1/n, so that the derivatives u_n' are precisely the spikes from before, converging weakly-* to the Dirac mass. Those are our Heaviside-type step functions. What happens to ∫ f(u_n')? Outside the ramp, on a region of size 1 − 1/n, u_n' equals zero, so we just get 1 times (1 − 1/n); plus, on a region of size 1/n, the derivative equals n, so we get (1/n) times √(1 + n²). As we let n go to infinity, the first term, which lives in the absolutely continuous part of our target measure, tends to 1, whereas the second term, the concentrating part, also tends to 1. So in the limit, we lose the perspective that this f is really nonlinear: we go off to the tail, where f winds up looking pretty linear, and we just get 1 + 1 = 2. So, I mean, this is a little toy example.
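The little energy computation above can be checked directly (the function name is mine): the absolutely continuous region contributes f(0)·(1 − 1/n) and the ramp contributes f(n)/n, and the sum tends to 2.

```python
import math

# F(u_n) for the step-function sequence: u_n ramps from 0 to 1 on a set of
# width 1/n inside a domain of length 1, so u_n' = n there and 0 elsewhere.
# With f(xi) = sqrt(1 + xi^2):
#   F(u_n) = f(0)*(1 - 1/n) + f(n)*(1/n)  ->  1 + 1 = 2.
def F_of_un(n):
    f = lambda xi: math.sqrt(1.0 + xi * xi)
    return f(0.0) * (1.0 - 1.0 / n) + f(float(n)) * (1.0 / n)

for n in (1, 10, 100, 10_000):
    print(n, F_of_un(n))
# The values approach 2: the absolutely continuous part contributes 1,
# and the concentrating part contributes 1, the slope of f at infinity.
```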
I don't mean this to prove anything, but I think it does an okay job of motivating the function that I'm about to define: f^∞(ξ) := lim as t → ∞ of f(tξ)/t. This is called the recession function. One thing to note: this limit exists. The trick to see it is that, for f convex, the map t ↦ (f(tξ) − f(0))/t is non-decreasing; that's the key. So this limit honest-to-goodness does exist. And f^∞ inherits convexity from f: you just write the convexity inequality for every t and pass to the limit. One important thing about f^∞ is that it's positively one-homogeneous. That is to say, if I look at f^∞(λξ) for λ > 0, this is the limit as t → ∞ of f(tλξ)/t, and substituting s = λt I can pull out a λ and get λ f^∞(ξ). So this is the recession function of f, and it's basically doing what we saw in the example. At the singular points, the input is blowing up while the supporting set is shrinking, so all that really matters is what's going on at infinity, and that really only depends on the direction, in a certain sense: it's one-homogeneous. And one-homogeneous functions play really nicely with weak-* convergence. This will be somewhat familiar to people who took Giovanni's calculus of variations class, I think. These are Reshetnyak's results, a family of results about how one-homogeneous functions interact with weak-* convergence. So suppose f is one-homogeneous and convex, and consider dμ/d|μ|, the Radon–Nikodym derivative of μ with respect to its total variation; these are vector-valued measures.
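A numerical probe of the recession function for the talk's example (using a large finite t as a stand-in for the limit, which is my choice of illustration): for f(ξ) = √(1 + ξ²) the recession function is |ξ|, and positive one-homogeneity can be checked directly.

```python
import math

# Recession function f_inf(xi) = lim_{t -> inf} f(t*xi)/t,
# probed at a large fixed t, for f(xi) = sqrt(1 + xi^2).
f = lambda xi: math.sqrt(1.0 + xi * xi)

def f_inf(xi, t=1e8):
    """Difference quotient at large t as a stand-in for the limit."""
    return f(t * xi) / t

for xi in (0.5, 1.0, -3.0):
    print(xi, f_inf(xi))          # approaches |xi|

# positive one-homogeneity: f_inf(lambda * xi) = lambda * f_inf(xi)
lam, xi = 7.0, 2.0
print(f_inf(lam * xi), lam * f_inf(xi))   # both approach 14
```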
So basically this is saying a measure has a magnitude and a direction everywhere: dμ = (dμ/d|μ|) d|μ|. Reshetnyak's lower semicontinuity theorem says that if μ_n ⇀* μ, then ∫_Ω f(dμ/d|μ|) d|μ| is less than or equal to the liminf of ∫_Ω f(dμ_n/d|μ_n|) d|μ_n|. These positively one-homogeneous convex functions are exactly the ones that are lower semicontinuous with respect to the weak-* convergence of measures. And it takes some work, but you can really show that the correct lower semicontinuous representation of the relaxation is this: split Du into its absolutely continuous part ∇u dx and its singular part D^s u, and then the relaxation is the integral of f over the absolutely continuous part, which is nice, plus the integral of the recession function over the singular part: ∫_Ω f(∇u) dx + ∫_Ω f^∞(dD^s u/d|D^s u|) d|D^s u|. So we can actually exactly compute what this relaxation is. And it's just saying that on the singular part we need to look at the recession function, because there we see this behavior where everything is blowing up while the supporting set is shrinking, and we see it precisely when f has linear growth. So, a quick calc-one exercise: take f(ξ) = √(1 + |ξ|²). We saw the recession function looks just like the absolute value off at infinity, so f^∞(ξ) = |ξ|. In the case of this particular f, the singular term simplifies, because the Radon–Nikodym derivative dD^s u/d|D^s u| always has magnitude one. And so in this case we get ∫_Ω √(1 + |∇u|²) dx plus the singular part measuring Ω, |D^s u|(Ω). And now something interesting happens. We said the F(u_n)'s converge to 2. But let's actually plug the limit into this formula: the limit is the step function whose derivative is the Dirac mass, and the Dirac mass has absolutely continuous part zero.
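Written out, the representation just described (in the convex, linear-growth case, with the talk's notation) is:

```latex
% Relaxation of F(u) = \int_\Omega f(\nabla u)\,dx to BV, f convex with
% linear growth; Du = \nabla u \, \mathcal{L}^n + D^s u is the
% Lebesgue decomposition of the gradient measure:
\[
  \overline{\mathcal{F}}(u)
  = \int_\Omega f(\nabla u)\,dx
  + \int_\Omega f^\infty\!\Big(\frac{dD^s u}{d|D^s u|}\Big)\, d|D^s u| .
\]
% For f(\xi) = \sqrt{1 + |\xi|^2} one has f^\infty(\xi) = |\xi|, and since
% the polar dD^s u / d|D^s u| has unit length, this reduces to the area
% functional:
\[
  \overline{\mathcal{F}}(u)
  = \int_\Omega \sqrt{1 + |\nabla u|^2}\,dx + |D^s u|(\Omega).
\]
```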
So here the absolutely continuous part contributes ∫_Ω √(1 + 0²) dx = 1, and then the Dirac mass over the whole set contributes |δ_0|(Ω) = 1, for a total of 2. So we actually did better than lower semicontinuity in this case: F(u_n) honest-to-goodness converges to F(u). So something's happened here, right? We had functions that were converging only weakly, we plugged them into a convex nonlinear function, and yet we got continuity, and that's not supposed to happen, right? We're not supposed to get convergence. So why did we get it in this case? My best answer is that nonlinear functions behaving badly occurs due to things like oscillation and mass escaping. And in this case, the mass is just concentrating, and once we allow ourselves to consider singular sets, we catch the mass that concentrated onto a singular set. To make this more precise: the functional I was using isn't just a cooked-up one; there's a notion of area-strict convergence. We say measures μ_n converge area-strictly to μ if μ_n ⇀* μ and additionally the area functionals converge, A(μ_n) → A(μ), where A is exactly the functional built from f(ξ) = √(1 + |ξ|²) as above. And somehow this is a really nice way to characterize concentration, because there's another Reshetnyak-type theorem which says: if μ_n converges area-strictly to μ, then basically every integrand of this form is going to actually converge. We have it for one integrand, we get it for a whole class of functions. In a certain sense this is a really nice way to capture the fact that our measures might be bunching up in places, so that we only have weak-* convergence.
But it's not happening because of the other normal example of weak convergence, where our functions oscillate. That's actually the phenomenon that makes convex functionals only lower semicontinuous: something that oscillates between two values converges weakly to its average, and the convex function of the average can be strictly lower than the weak-* limit of the values. On the other hand, area-strict convergence is saying there is no oscillation: if we get continuity for one convex nonlinear functional, we actually get it for all of them, as long as we pay attention to the mass that's going into concentration. So here's the nice thing, here's what I've been doing with my life lately. The statement is: if μ is a measure on a bigger set, then for every Ω with |μ|(∂Ω) = 0, if we take μ and take the normal convolutions, the mollifications actually area-strictly converge to μ on Ω. So we can always find a particular sequence of smooth functions that area-strictly converge to μ. Which is to say, when you put it a certain way it's almost trivial, right? If we want to approximate a measure with smooth functions that don't oscillate too much, we might as well pick the standard way to approximate things, because mollification isn't going to introduce any oscillation, as long as you make sure mass can't escape to the boundary. So that's basically my talk. Going any further would just get more technical, so I think I'm going to stop here. Thank you all for your time. [Audience: so you're saying that if one integrand converges, you get the whole class? That seems better than just convex ones.] Yeah, and it's a huge space actually that you get; you get essentially all suitable continuous integrands.
I think you need to work a little bit with your definition of the recession function, because it may not be well-defined for every continuous integrand, but it's essentially all integrands that have a well-defined recession function. So you get a huge space of functionals that are going to be continuous, as long as your sequence is good enough that this one strictly convex integrand behaves nicely. It's surprising; it's a very surprising result. But somehow it's really just saying that if that one functional converges, then there can't be any oscillation going on; it would get captured. [Audience: where does the name come from?] Yeah, it didn't come originally from measures; it came when people were talking about gradients, with ∇u and D^s u in there. You're really looking at the graph of a function and its surface area: if I look at the graph of u(x), the surface area element is √(1 + |∇u|²). And at jumps, if the function jumps somewhere, this formula just gives us a nice vertical piece, like a little cylinder wall: you look at the subgraph and take its closure, and you get the boundary of this cylinder as part of the graph. So that's why it's called area-strict convergence. It came originally from people thinking about these measures as the gradients of BV functions, but it's actually well-defined for any measure. It's a fact about measures, this thing; there's nothing about functions in it. Yeah. Beautiful.
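The mollification statement from the end of the talk can be illustrated numerically (kernel choice, domain, and tolerances are mine): mollify δ_0 with a triangle kernel on Ω = (−1, 1) and watch the area functionals A(ρ_ε) = ∫_Ω √(1 + ρ_ε²) dx climb toward A(δ_0) = |Ω| + |δ_0|(Ω) = 2 + 1 = 3, i.e. area-strict convergence.

```python
import math

# Mollification rho_eps(x) = (1/eps) * rho(x/eps) of the Dirac mass delta_0,
# with rho the triangle kernel (total mass 1).  The mollifications converge
# weakly-* to delta_0, and their area functionals converge to
#   A(delta_0) = |Omega| + |delta_0|(Omega) = 2 + 1 = 3   on Omega = (-1, 1).
def rho_eps(x, eps):
    y = x / eps
    return max(0.0, 1.0 - abs(y)) / eps

def area(eps, N=400_000):
    """Midpoint-rule approximation of int_{-1}^{1} sqrt(1 + rho_eps^2) dx."""
    dx = 2.0 / N
    xs = (-1.0 + (i + 0.5) * dx for i in range(N))
    return sum(math.sqrt(1.0 + rho_eps(x, eps) ** 2) for x in xs) * dx

for eps in (0.5, 0.1, 0.01):
    print(eps, area(eps))   # values increase toward 3 = A(delta_0)
```

The smooth approximants never oscillate, so the only way mass moves is by concentrating at 0, and the area functional tracks that concentration exactly, which is the point of the theorem.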