I'm thrilled to welcome everyone here today to the lecture by the Robert E. Wall Award winner, this year Professor Mark Demers. We're all looking forward to his lecture on butterflies and chaos theory. But first, I'd like to invite President Mark Nemec to say a few words.

I don't want to take too much time, but let me offer first my congratulations, and second my thanks to all of you for coming out. One of the things that is always so hard for anyone who has done research is the question of, well, is anyone going to listen? Is anyone going to read? So this is an affirmation for us as a learning community. What is so exciting about this award is two things. One, it really embodies that Ignatian value of reflection: the fact that we as an organization, we as an institution, ensure that our faculty have time to really dive deep is, I think, something we have to continue to maintain. And secondly, there is this idea that we're a community, a community of scholars. We're not just members of disciplines, we're not members of departments; we are, in fact, a community of university scholars. I think this award is a great reflection of that, and this attendance is a great reflection of that, and that's why I'm thrilled to join you. I just want to congratulate Mark on his award and say I'm looking forward to the talk. Thank you.

It's now my pleasure to introduce Professor Mark Demers. Mark studied mathematics and English literature at Amherst College. After teaching math and English in Micronesia for three years as part of a Jesuit volunteer program, he returned to higher education and earned his PhD in mathematics from the Courant Institute at New York University. He spent three years as a visiting professor at the Georgia Institute of Technology before joining the faculty here at Fairfield University in 2006. Since 2008, he has obtained three three-year research grants from the National Science Foundation, that's nine years for those doing the math, and has used the NSF funding for his current research collaborations. Dr. Demers's field of research is dynamical systems and ergodic theory. In particular, he is interested in studying statistical properties of discrete and continuous time dynamical systems with some chaotic behavior; he's going to tell us all about that. Dr. Demers's project on chaos theory and the Lorenz attractor focuses on establishing an analytic framework in which to study the long-term behavior of some chaotic dynamical systems. The information from this analysis is central to proving exponential decay of correlations for certain important models such as the Lorenz attractor and mathematical billiards. Dr. Demers has worked intensively with Professor Liverani at the University of Rome in Italy and Professor Baladi at the University of Paris in France to complete this ambitious project examining long-standing and important problems in mathematical physics. Thank you.

So, Christine makes it sound better than I do, actually. On that note, we should have our refreshments. I wanted to say thank you, essentially, to all of you for coming, and to the University for this great award. Really, the gift of time is something that, with all our busy schedules, is very precious, and it's the kind of gift that keeps on giving, too.
So, I spent last spring abroad for a couple of months, and I'll talk about some of that project here today, but during that semester I also started three new projects through conversations and conferences I was attending. These are the kinds of connections that get made, and this kind of time spent intensely on research has downstream effects for years to come. So I think it's a really valuable, wonderful institution we have here at Fairfield, this Wall Award, and I hope it not only continues but grows. It's a wonderful thing.

Okay, for today's talk, I thought that if I just talked about my research you might get a little bit bored. So I thought I'd give a gentler introduction to the topic of dynamical systems and chaos theory, a bit of a historical overview. We'll start with what I think is really the birth of dynamics, starting with Newton, and I'll say what I think of Newton: I think very highly of Newton. Essentially this is the first dynamical system, and the beginning of the field, I think, has to start there. Then I'll work my way up through a few centuries to Lorenz and the butterfly effect, which is, at least ostensibly, the title of the talk. Then I want to backtrack a bit to statistical mechanics, which really happened at the intersection of chemistry and physics, I would say, back in the second half of the 19th century, and which has a lot to offer the modern theory of dynamical systems. Those two streams intersect, and my Wall Award project sits at the intersection of those two streams. So then I'll tell you what that is, and I will limit the mathematics to the statement of one theorem, so it shouldn't be too bad. That theorem is about the decay of correlations for a certain model called billiard flows. And then I'll talk about next steps: why is this important, why do we want to know about this exponential decay, what can we use it for? So that's a quick overview of the talk.

Let's start with Newton. Here's a cartoon of our solar system with Pluto included, just for sentimental reasons, because I grew up with Pluto and I'm reluctant to eliminate it. Of course, the solar system of Newton's day just had six planets; here we've got eight and a half. But what Newton was doing is he wanted to study motion, and he didn't have the tools to study motion, so he invented them. He invented essentially calculus, which is the notion of the derivative, the rate of change, instantaneous velocity, and a way to express that precisely, and then along the way, of course, differential equations, which are just equations about functions and their derivatives. These equations of motion arise naturally in his studies, and he founded these fields. So here I've got the universal law of gravitation; that's really the right-hand part: G is the gravitational constant, the two m's are the masses, r is the distance between the two objects, and that's the force of gravity the objects exert on one another. And I've coupled that on the left-hand side with Newton's second law of motion, mass times acceleration. If you do that, then you have a differential equation, because you've got r, the position of the planet, and you've got acceleration, which is the second derivative of the position with respect to time.
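The equation on the slide isn't reproduced in the transcript; as a reconstruction from the description above, in standard vector notation (the minus sign just says gravity is attractive), it reads roughly:

```latex
% Newton's second law (left) set equal to the universal law of gravitation (right),
% for a planet of mass m at position vector r relative to a sun of mass M:
m\,\ddot{\mathbf r} \;=\; -\,\frac{G\,M\,m}{|\mathbf r|^{2}}\,\hat{\mathbf r},
\qquad \hat{\mathbf r} = \frac{\mathbf r}{|\mathbf r|}.
```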
So it's an equation involving derivatives, and you can solve that equation. Newton did solve that equation, and you get an expression that represents the motion of the planet: you know where the planet is at each moment of time, at least in theory, and its velocity. The importance of this that I want to highlight is that Newton moved from observational laws, which were Kepler's laws, to something that had some explanatory power. That was the big transition Newton represented. Kepler's laws had been distilled from hundreds and hundreds of observations of the planets moving. Kepler said, okay, look: all the planets move in ellipses; the motion of a planet sweeps out equal areas in equal times; the square of the period of motion is proportional to the cube of the semi-major axis of the ellipse. Wow, how did he notice that? Well, he did, and that's the third of Kepler's laws. But all these things can be derived from Newton's laws. Newton is essentially reducing this to the action of forces and quantifying them through gravity. So a key theme in this talk will be this attempt to reduce observational laws to more fundamental facts that have some explanatory power.

Okay, so this is sort of the first dynamical system, the solar system. What is a dynamical system? It's really any system you can think of that changes over time, or rather a mathematical model of such a system. You can think of the weather (we'll talk a little bit about the weather today; it's a nice day outside), the solar system, fluid flow over an airplane wing, turbulence in a pipe, the motion of prices in the stock market. These are all dynamical systems: quantities that you can track over time. But they all have some common features; there are two things that are really essential. One is a phase space, which is the set of all possible states of the system. In our example of the solar system, that would be the set of all possible positions and velocities of the planets. And then on that phase space, you need some rule for how things move: either discrete time, where you have a map, like looking at snapshots as a point moves from one place to the next in the phase space, or continuous time, where you think more of a flow, like the movement of a planet continuously along an orbit. Either way, you have some rule which gives you the dynamics on the phase space. In our example, the way we would think of this is that the position and velocity x(t) of some planet as a function of time is given by the flow phi_t, which is the rule for moving, applied to x_0, the initial condition. So given some initial condition, which would be the initial position and velocity of a planet, the rule takes the planet forward and you know where it is after that; the flow tells you where it goes. Those two things define a dynamical system.

Unfortunately, while this setup is nice, it can be hard to solve these things. Newton solved the two-body problem, the sun-planet problem, but if you just add a moon to that, then we still can't solve it; that's a lot harder all of a sudden. And forget about eight or nine planets, that's just impossible. So what do we do?
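As a minimal sketch, not from the talk, of what "phase space plus a rule" means in this example: the state is the planet's position and velocity, and a small time-stepping rule plays the role of the flow phi_t. The units, the initial condition, and the leapfrog integrator here are my own illustrative choices.

```python
import numpy as np

# Minimal sketch: the Sun-planet system as a dynamical system.
# State = (position, velocity) of the planet; the "rule" advances Newton's
# equation m*r'' = -G*M*m * r / |r|^3 forward in small time steps.
G_M = 4 * np.pi**2          # G*M in units where distance is AU and time is years

def step(r, v, dt):
    """Advance the state (r, v) by one time step dt (leapfrog / velocity Verlet)."""
    a = -G_M * r / np.linalg.norm(r)**3
    v_half = v + 0.5 * dt * a
    r_new = r + dt * v_half
    a_new = -G_M * r_new / np.linalg.norm(r_new)**3
    v_new = v_half + 0.5 * dt * a_new
    return r_new, v_new

# Initial condition x0: an Earth-like planet, 1 AU from the Sun, circular speed 2*pi AU/yr.
r, v = np.array([1.0, 0.0]), np.array([0.0, 2 * np.pi])
for _ in range(365):
    r, v = step(r, v, dt=1 / 365)   # "flow" the initial condition forward one year

print(r)  # after one period the orbit returns near the starting point (1, 0)
```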
Well, after people tried for a couple of hundred years to solve these things, Poincaré, who is a very important figure in dynamics, introduced and formalized a qualitative theory of dynamical systems. You can characterize orbits, you can say where they go for the most part, whether they stay bounded, whether the planets crash into the sun, even if you can't solve exactly for the position and velocity of the planet at every moment in time. This reframed the field, or highlighted, I guess, two questions in the field that are really important.

The first has to do with predictability: what is the long-term behavior of the system? We talk about orbits. Here's an orbit: you start at some initial condition x_0, a position and velocity, and then you look at the whole forward path of this point in your phase space for all time; that's an orbit. For a planet that's moving periodically, that might just be an ellipse, but for a more complicated system it could be more complicated windings around the phase space. Do most orbits remain bounded? Do they accumulate in one region of the phase space? Do they spread out evenly, in what we call a mixing system? That has to do with predictability. The other question is stability. If we change our initial condition x_0, or maybe if we change the rule of the system a little bit, do we notice huge changes in the observed orbits? If we start from a slightly different place, do we end up in a very different place? Those are two questions. They're related, of course; there's influence from one question to the other, and if you have instability, then predictability becomes very difficult. But those are two of the important questions in the field.

Now let's fast forward a bit to 1961, when computers start to become available. Lorenz, a meteorologist at MIT, was actually trained in mathematics; he almost got a PhD in mathematics, studying at Harvard, but then World War II happened and he switched: he got assigned in the Air Force to be a meteorologist, and after the war he decided he liked it so much that he stuck with meteorology and went to work at MIT. He was playing around with some nonlinear models of the atmosphere, because he thought the linear prediction models of the time were not so good and missed some important features. And he started working on a very simple model of a two-dimensional atmospheric convection cell. You just think of a cell, a two-dimensional slice of space, heated from below and cooled from above, as it would be in the atmosphere. That's going to drive a convection cell; that's what he's trying to model. He did some manipulation: he started with some PDEs and reduced them to the system of ordinary differential equations you see here. The three variables x, y, z represent: x is the rate of convection, y is proportional to the temperature difference in the horizontal direction, and z to the temperature difference in the vertical direction. And you can see from this diagram, with some orbits plotted from this system, the convection cell, the up and down motion. That's kind of nice. Now, this is just the y-z phase plane; I'm suppressing x, but you can see it. So he's running these models, and computers are not that efficient yet.
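The system on the slide isn't reproduced in the transcript; assuming it is the standard Lorenz system, which matches the description of x, y, and z above, it reads:

```latex
% The Lorenz equations in their standard form; sigma, rho, beta are fixed
% parameters (Lorenz used sigma = 10, rho = 28, beta = 8/3):
\dot{x} = \sigma\,(y - x), \qquad
\dot{y} = x\,(\rho - z) - y, \qquad
\dot{z} = x\,y - \beta\,z.
```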
To save some time, he uses some data that he had produced in a previous run. He feeds it in halfway through, and he sees that the end result is vastly different from the first run. His first assumption is, well, the machine is just acting up again, these unreliable things. So he looks at it more carefully. He starts comparing the runs step by step to see where the error is happening, and he realizes, no, it's not machine error. The digits were just slightly different: the printout that he used for the halfway point of the previous run only had three decimal places recorded, but internally the machine had been keeping track of six the whole time. So the difference between letting the machine run on its own from the halfway point to the end, and inputting the values manually and then letting it run, was just in decimal places four, five, and six. They started with little differences, and then they just diverged; the differences got bigger and bigger until, at the end of the day, the results were completely unrecognizable. He got kind of excited about this and thought, wow, this is really something, and he published it in a meteorology journal, and so no one noticed for about ten years. It just didn't go anywhere at first. But he had formulated it: the butterfly effect, right here.

Now I've rotated the picture of those same orbits into the x-z phase plane, and you can see why it has this little butterfly shape, this chaotic attractor. The butterfly effect is this idea that small changes in initial conditions can lead to very different effects downstream. The flap of a butterfly's wings in, pick your favorite place, Texas, creates a tornado in, wherever, China, India, Kansas. Take your pick; it doesn't matter. The point is that tiny, tiny changes can create large effects. This basic idea in dynamics had been around for a while, since Poincaré. Remember, Poincaré was studying the qualitative behavior of dynamical systems, and he actually showed that in the solar system you could have orbits of planets that were non-periodic and remained bounded, so you could have some very complex orbits in that setup. But people thought this had to do with large systems: lots of degrees of freedom, lots of different bifurcations you can get as you change parameters in a system. Lorenz showed that, no, it could actually occur in a very simple 3D model, nothing fancy, with a fixed set of parameters, and you're going to get this very sensitive dependence on initial conditions.

For him, this sentence down below, a quote from one of his papers, was very important: an approximate knowledge of initial conditions does not yield an approximate solution. For a meteorologist, approximate knowledge of initial conditions is all you ever get. So for him it was essentially saying that none of his prediction models would ever be much use beyond about four or five days. The errors propagate very quickly; he was able to quantify them, and he saw that the rounding errors essentially grew to dominate the solutions within about five days' time. So that put a limit on what he thought weather prediction could achieve. And this led to chaos theory.
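Here is a small sketch, my own and not Lorenz's program, that repeats the experiment in spirit: integrate the standard Lorenz system from an initial condition kept to six decimal places and from the same condition rounded to three, and watch the gap between the two runs grow. The initial values and the crude Euler step are my own choices.

```python
import numpy as np

# Sketch of Lorenz's accidental experiment: two runs of the standard Lorenz system
# whose initial conditions differ only from the fourth decimal place onward.
SIGMA, RHO, BETA = 10.0, 28.0, 8.0 / 3.0

def lorenz_step(state, dt=0.01):
    """One forward-Euler step of the Lorenz equations (crude, but fine for illustration)."""
    x, y, z = state
    dx = SIGMA * (y - x)
    dy = x * (RHO - z) - y
    dz = x * y - BETA * z
    return state + dt * np.array([dx, dy, dz])

full = np.array([1.234567, 2.345678, 20.456789])  # the run kept to six decimal places
trunc = np.round(full, 3)                          # restarted from the printout: three decimals

for t in range(2001):                              # about 20 time units
    if t % 500 == 0:
        print(t * 0.01, np.linalg.norm(full - trunc))  # the tiny difference grows and grows
    full, trunc = lorenz_step(full), lorenz_step(trunc)
```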
It took about ten years to sink into the mathematical community. Actually, at one point he had not sent in a title for a talk, and someone made up a title for him with the butterfly flapping its wings in it, and this actually drove some of the attendance and the interest, and suddenly people started noticing. So it helps to have a catchy title every once in a while. Now, I'm skipping the work of a lot of great people here. In the late 60s, Smale was also working on this; he was a mathematician working from a very geometric point of view. Mandelbrot, who was here at Yale, close by, was working in fractal geometry; he was discovering fractals and fractal dimensions. These are all related to this chaos theory, and so a whole branch of mathematics is born in the 70s.

But it's more of a philosophy, right? To this day we still don't agree on a mathematical definition of chaos, because it appears in too many different situations. If you try to put a definition on it, if you try to put it in a box, then there's always some other example out there you want to include that really has the same features. And so we don't; we have no definition for chaos. It's just an idea that organizes a certain set of thoughts, a certain point of view. But certainly any time we talk about a chaotic dynamical system, it will have some very great complexity in the dynamics, and sensitive dependence on initial conditions is always one of the criteria. Also a dense structure of periodic and non-periodic orbits; this has to do with the mixing of order and disorder. There's actually a lot of order in a chaotic system, and more on that on the next slide. And then hyperbolicity. This is something we'll see in the other models we'll talk about today. Hyperbolicity means that there are some directions in the phase space where you see a lot of expansion and others where you see a lot of contraction. You start two initial conditions very close together, and if they're aligned in a certain way, you'll see they fly apart really fast; but if you place them so that the line connecting them, if you will, is oriented another way, then they're actually pulled together very fast. This pushing and pulling in the phase space creates a lot of folding, and that creates a lot of the sensitivity and a lot of the complexity in the dynamics. Hyperbolicity is a well-defined thing, and it's one of the mechanisms that causes this sensitive dependence.

Now, a lot of times we use the word chaos loosely. I have two young kids: chaos. Their behavior is kind of random, right? But chaos theory is not about random systems; it's about deterministic systems. This is somewhere the English usage of the word differs a little from the mathematical usage. So I want to emphasize here that chaotic dynamics is deterministic; it is not about randomness in phenomena. Here I've got a picture from Jurassic Park; maybe you recognize it. The mathematician Malcolm is trying to explain chaos theory to the paleontologist Ellie, and he does a bad job. If you've ever seen this movie, this is a bad explanation of chaos theory.
I don't know if you remember this moment, but he takes a water droplet and drops it on her knuckle, and it runs to the right side one time and to the left side the other time. Ellie says, oh, we just dropped it in a slightly different place. And he says, no, no, no, I dropped it in exactly the same place. They're in a moving vehicle and he's just pouring by hand; sure, he dropped it in the same place. It's the microstructure in your skin that sends it a different way. No, it's not. She had the right explanation: he dropped it in two different places, and that's the whole point of sensitive dependence on initial conditions. There were two different initial conditions. It's a deterministic system; if he had dropped it in the same place twice, with the same initial condition, you would get the same outcome. The whole point of chaos theory in a deterministic system is not randomness; it is sensitive dependence on initial conditions. So the paleontologist is correct in this movie, if you ever see it. No offense to Jeff Goldblum.

Okay, but still there are parallels; now I'm going to contradict myself. There are parallels to probability theory. There are certain aspects of chaotic dynamical systems which feel very much like a probabilistic system. So let's go back to this butterfly-shaped Lorenz attractor and see what we mean. We can do a very simple thing in this attractor. You just follow an orbit around; you go around these curves, and every time you go around the left loop, you write down a zero, and every time you go around the right loop, you write down a one. Then you get a sequence of zeros and ones that represents the path you took around this attractor, this fractal set.

So, complexity: what kind of sequences do you get out of this? It turns out you get all of them. Every single sequence of zeros and ones that you can write down is realized by some orbit, or maybe more than one orbit, in this attractor; every infinite sequence. And this comes back to that dense structure of periodic and non-periodic orbits. You can write down a periodic sequence, like zero one, zero one, zero one forever, and that will be a periodic orbit that goes around this attractor. Or you can write down a completely arbitrary non-periodic sequence, say the binary expansion of pi, and there will be an orbit that realizes the binary expansion of pi on that attractor. That's the complexity: everything is represented. Every single sequence that you can think of is dynamically represented.

Sensitive dependence: this just means you can't predict the next digit. I give you finite information, any string of digits that you want, but a finite number of them, and you can't tell me what the next one is, and I can't tell you what the next one is. Because there are orbits that have that sequence followed by a zero, and orbits that have that sequence followed by a one, and you don't know which one you're on; they could be very close together. That's the sensitive dependence. It feels a lot like the randomness of a coin toss. Heads, tails: you give me a string of heads and tails, and I say, okay, I got 80 tails in a row, what's my next flip going to be? Well, it's 50-50 again; you don't know. And it's the same with the windings of an orbit around this chaotic attractor. You just don't know.
The sensitive dependence is so great that you could get either result, even though the system is deterministic.

Okay, so that's the orbit-wise, trajectory-wise approach to the development of chaos theory, up to, I guess, about the 70s. Let's backtrack now; let's go back to the end of the 1800s and talk about physics and chemistry. People are studying ideal gases, and Boltzmann, together with Maxwell and Gibbs, is putting together a theory of ideal gases called kinetic theory. What they're postulating, what Boltzmann is postulating, is that gases are actually made up of lots of little atoms, and all the phenomena that we observe, like temperature and pressure, are just a result of how these atoms collide in space, what attractions they exert on one another, how their collisions go, both with each other and, if you have a container, with the walls of the container, which is where pressure comes from. All of these macroscopic quantities should be explainable by the motion of these microscopic particles. But of course, this was a very controversial theory at the time, because the existence of atoms was not accepted by the whole scientific community. It was a theory, and Boltzmann in particular took it kind of hard; he was insulted on some occasions. Famously, he wanted to give a talk at a conference and was denied the opportunity to speak in a physics session, because what he was doing was not physics; he was allowed to speak in the applied mathematics section. Well, the mathematicians will listen to you, but, you know, you're not really doing physics. That was the state of affairs for some time. It was, of course, in the end vindicated, as we now all believe in atoms, or at least I think we do.

And this kinetic theory turns out to have a lot of explanatory power. Let me give you an example. This is Fourier's law of heat conduction. If you just take a wire and think about the two ends of the wire at different temperatures, then Fourier's law, the equation here in the middle, says that the heat flux density is proportional to the spatial derivative of the temperature. In a very simple case, if you fix the two ends of the wire at two fixed temperatures, T1 and T2, then the heat flux density is just a very simple expression: the difference of temperatures divided by the length of the rod. So Fourier's law says heat goes linearly: you fix this end at zero degrees, this end at 100 degrees, and what do you see on the rod? In steady state, you just see a linear function interpolating the two temperatures, zero and 100. That's a very simple law. But what are you doing in that law? You're taking a derivative of temperature, so you're assuming all these quantities are continuous, smooth, differentiable. And what does the kinetic theory say? It says nothing is smooth; it's all these little atoms smacking together. What is temperature but the average kinetic energy of all these molecules? Some of them are fast, some of them are slow; on average there's some value, and that's the temperature you measure. So this is a very different way of looking at it, one which is not smooth in some sense; it's not saying that the underlying structure is a smooth thing that you can take a derivative of.
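The equation "here in the middle" of the slide isn't in the transcript; in standard notation, Fourier's law and the simple steady-state consequence being described are:

```latex
% Fourier's law: heat flux density proportional to the temperature gradient,
J \;=\; -\,\kappa\,\frac{\partial T}{\partial x},
% and for a rod of length L whose ends are held at temperatures T_1 and T_2,
% the steady state is the linear profile with constant flux:
T(x) \;=\; T_1 + \frac{T_2 - T_1}{L}\,x,
\qquad
J \;=\; \kappa\,\frac{T_1 - T_2}{L}.
```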
So one of the goals of this field, one of the things they were concerned with very early on and that we're still concerned with: could we, for example, derive Fourier's law from the equations of motion of the microscopic atoms? We still can't. This is an unsolved problem; we cannot do it, it's too complicated. But that would have the same kind of explanatory power as, say, Newton's law of gravitation. You have some observed phenomenon, and you say, ah, it's really a result of this force, and this force now explains all of those observed laws. Fourier's law, at the end of the day, is an observed law, and if you want explanatory power, you should go down to the microscopic level where these things are happening. One interesting feature they had to deal with early on, and I think it has been handled pretty convincingly by now, is that the microscopic dynamics are reversible, meaning that if a particle bounces around going one way, you can just turn it around and bounce it back the other way; but the heat flow is not reversible.

Now, one of Einstein's big contributions was to give a probabilistic model for Brownian motion. Back in the 1820s, Robert Brown had observed little particles jiggling in still water. Why are these things vibrating when the water is still? Well, the kinetic theory said, of course: they're being bombarded by all the molecules in the water, and so they're constantly jiggling as a result of these collisions. So Einstein came up with a probabilistic model for this Brownian motion, which actually has very wide applicability. He not only derived the heat equation using this probabilistic model, but it has been applied to tons of things involving random, stochastic processes. Financial mathematics is a field that is founded essentially on stochastic calculus, which is all about Brownian motion. So you have applications to things you might not even think of at first; it turns out to be really useful. But if you assume that the underlying dynamics are deterministic, then we do not have a derivation of Fourier's law, and it would be nice to have one.

Along these lines, Sinai is another mathematician working in the 60s, at about the same time that Lorenz is discovering, or proposing, this butterfly effect. But Sinai is in Moscow; he's part of the Moscow school of dynamics, and there's not much contact at that time between the Russian school of mathematicians and the mathematicians in the US. They're actually working on parallel tracks without really talking to each other, and they didn't until the 70s, when they realized they had both been working on chaos theory independently. So here's a mathematical model, a deterministic one, that Sinai proposed. It's just a billiard, the kind of thing you would see in a geometric optics course: a point particle bounces around a table with fixed obstacles, like playing pinball with a ball that's just a point. The obstacles are fixed, the ball moves in straight lines until it hits one of the obstacles, and then it collides and reflects; the collision is elastic, so it obeys the law of reflection, the angle of incidence equals the angle of reflection. And this is a continuous time flow.
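A crude sketch, not from the talk, of that reflection rule: a point particle flies in straight lines inside the unit square and bounces elastically off one circular scatterer. The geometry, the names, and the periodic wrap at the edges (a stand-in for an array of scatterers) are my own choices.

```python
import numpy as np

# Toy Sinai-type billiard: a point particle moves in straight lines inside the
# unit square (with periodic edges) and reflects elastically off a fixed
# circular scatterer of radius R centered in the square.
CENTER, R = np.array([0.5, 0.5]), 0.2

def time_to_scatterer(p, v):
    """Time until the ray p + t*v hits the circle, or None if it misses."""
    d = p - CENTER
    b = np.dot(d, v)                 # v is a unit vector, so the quadratic is t^2 + 2bt + c = 0
    c = np.dot(d, d) - R**2
    disc = b * b - c
    if disc < 0:
        return None
    t = -b - np.sqrt(disc)
    return t if t > 1e-9 else None

p = np.array([0.05, 0.31])                          # initial position
v = np.array([1.0, 0.1]); v /= np.linalg.norm(v)    # initial direction (unit speed)

hits, steps = 0, 0
while hits < 5 and steps < 10_000:
    steps += 1
    t = time_to_scatterer(p, v)
    if t is None:                    # no scatterer ahead: drift a little and wrap around
        p = (p + 0.1 * v) % 1.0
        continue
    p = p + t * v                    # fly straight to the scatterer...
    n = (p - CENTER) / R             # ...outward unit normal at the collision point
    v = v - 2 * np.dot(v, n) * n     # specular reflection: angle in = angle out
    hits += 1
    print("collision at", np.round(p, 3), "new direction", np.round(v, 3))
```

Recording just the collision point and direction at each bounce, as in the print statement, is essentially the discrete time collision map described next.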
You can turn it into a discrete time map by just saying, look, nothing happens between collisions, so why don't we just record the position where the ball hits the obstacle and the angle it makes, say, with the normal vector to the boundary. You record position and angle at the sequence of collisions, and you have what we call the discrete time billiard map. You've taken the continuous time out of the problem. So these are two related dynamical systems, one continuous time and one discrete.

Sinai proved that for these billiard models, if the boundaries of the obstacles are convex, the dynamics are hyperbolic, and so they actually exhibit chaotic dynamics. The hyperbolicity, the expansion in the phase space, you can see in this figure with these diverging wave fronts. A bunch of trajectories start out as a wave front, nearly parallel to each other. They hit that first scatterer, and because of the convexity of the boundary the spread in their angles is increased; they hit the second one and it's increased again, and so that wave front spreads, and as it goes, it spreads more and more. That's the mechanism that drives the hyperbolicity and the chaotic dynamics of the system. And Sinai proved that these billiards are mixing, and I'll say a little bit more about mixing on the next slide, but he wasn't able to quantify the rate of mixing: how fast do these things mix? For the discrete time map, Lai-Sang Young, who was my PhD advisor at NYU, proved in '98 that the discrete time collision map does mix exponentially fast, but the question remained open for the flow, and the flow is what we'd want for something like Fourier's law. So that is what the goal of the Wall Award project was: to show that the billiard flow for these kinds of dispersing billiard tables mixes at an exponential rate. I will tell you in a moment what those words in red mean, but first I want to draw attention to this idea, this combination of, on the one hand, the dynamics, the chaotic dynamical system, this dispersing billiard table, and on the other hand, the statistical mechanics that was coming in, talking about average behavior, the behavior of large clouds of particles. The contribution at the intersection of those two things is that, even though path-wise chaotic systems are very difficult to follow, on average the behavior of particles is very regular.

So let me show you what I mean by that with this little video; let's see if it works. Here it is. There is some way that I can make it larger, right? Okay, it will reset and start over again in a moment: when t gets to four, it will reset. Here it is: a clump of particles, all very close together, moving in the same direction across this two-dimensional billiard table. At first they all have the same direction, but as they hit these dispersing boundaries, the wave front begins to expand, and now you see it starts to go: you see these diverging wave fronts, you see the folding in the wave fronts, and you see things start to mix; particles are becoming more evenly spread across the table. Now, the color coding is the direction that each particle is moving in, because this is really a three-dimensional flow: we can only represent two dimensions here in the picture of the table, so the third dimension, the direction, is represented by color.
The arc of the circle you see around the outside is coded by colors: when a particle is moving in a given direction, the color lights up on that segment of the circle. So when the particles are going in only a few directions, you see just a little bit of the circle lit up, but as particles go in more and more directions, you see more and more of that colored circle around the perimeter light up. That's how we represent all three dimensions of the flow in a two-dimensional picture. And you see that by about time equals three, the system is more or less mixed: the two spatial dimensions are more or less uniformly mixed within the table, and the directions are more or less all represented around the outer part of the table. At first the particles are doing different things and there's some structure to how they move, but by the time you get up to t equals three, we'd say that the system has reached equilibrium. From that point on, it's kind of boring to watch, because it doesn't do anything else after that; it always just looks like that. For a while you get to see some changes, some differences, but after about three it's just going to look like that: a colored circle and some dots going back and forth. That's equilibrium; it's not going to do anything after that. So that's what we mean by very orderly statistics: individually these orbits are all unstable, but as a group they have very regular statistics. Now, how do I get back to my presentation?

All right, so let's make precise what we mean by correlation; here comes the math part of the talk. Suppose we have some flow on some phase space X, and an observable is going to be a function that assigns to each point in your phase space some number. You can think of it as temperature or pressure at that point in the phase space, some measurable quantity. So that's a function on the space, and we can let that function change with time by composing the function with the dynamics. Here f is now applied to the flowed point, phi_t of x: not the value of f at x, but you let x flow for some time t and then you measure f at that point. So the value of that function changes with time. And if you want to know whether these points are mixing around, you can ask how closely the original function f resembles the flowed function, f composed with the flow at time t.

For a way to look at this, we can draw on probability theory for inspiration. Take two random variables, and think of our two functions as two random variables. If they are uncorrelated, their expectations (E just stands for the expected value, the average value, of the random variable) decouple according to this equation: the expected value of the product is just the product of the two expected values taken separately. That's what independence gives you: if the random variables are independent, they're also uncorrelated. The converse is not true, but for the most part you can think of uncorrelated as being like independent. Okay, but we're in a deterministic case; we don't have a random system. If I know x, then of course I know phi_t of x, because it's just a deterministic flow.
And so that means if I know the value of f(x), I have some information about the flowed value; these random variables are correlated. So we don't ask for the equality to hold in a deterministic system; instead we look at the difference, and that defines the correlation function. Take the difference between the expected value of the product and the product of the expected values: on the one hand the function before you introduce the dynamics, and on the other the function composed with the dynamics, and you ask how close those two things are to being uncorrelated. What is the value of that difference? If it's small, then they're almost uncorrelated; if it's big, then they're strongly correlated. Then you can ask how this difference changes over time, and the first question becomes: does it go to zero? If it goes to zero, then asymptotically the observables are becoming uncorrelated, because if that difference is zero, the two quantities are equal. That's the definition of mixing: we say a system is mixing if observables become uncorrelated asymptotically, that is, if the correlation function goes to zero. And once you know it goes to zero, the question becomes: at what rate? Is it polynomial? Is it exponential? What does that function look like? The stronger your chaos is, the stronger your hyperbolicity, the faster the rate of mixing, and the most strongly chaotic systems will have exponential rates of decay. That's about as fast as you can expect things to go. (Remind me later, yes.)

Okay, so what's the math in proving that the billiard flow is mixing? The thing is that we do have hyperbolicity, strong expansion and contraction, but unfortunately it all happens in the plane that's perpendicular to the flow. Here I've drawn one of these dispersing wave fronts, with the flow direction perpendicular to it. In the plane perpendicular to the flow you've got this stretching and pulling, this hyperbolicity, but the flow direction itself is completely neutral: no stretching and pulling. And you can have hyperbolic flows in which the flow just separates into layers and you don't have mixing at all, even though within each layer there's a lot of mixing; across layers there isn't. So hyperbolicity is not the only mechanism we use to prove this; we needed something else to tell us that things don't separate into layers in the flow direction. So here I've made a little diagram; we actually have to use a little differential geometry to get this proof to work. What you've got here is essentially the plane perpendicular to the flow; the flow direction is going into the board, or into the screen. Here's your contracting direction, W^s, s for stable, and your expanding direction, W^u, u for unstable. You can make loops in your phase space going around these stable and unstable directions, and when you make that loop, stable, unstable, stable, unstable, you hope that you do not end up back where you started. Because if you end up back where you started, you've just created a layer, and then there's no mixing. So you had better miss. But not only had you better miss, you'd better have a gap there, and if you want a rate of mixing, you have to be able to quantify the size of that gap.
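The formulas on the slides aren't reproduced in the transcript; a standard rendering of what was just described in words (written for a pair of observables f and g, with g = f giving the version above; the last line is the shape of the theorem stated in a moment) is:

```latex
% Correlation function of observables f, g under the flow phi_t, with E the
% average with respect to the invariant measure:
C_t(f,g) \;=\; \mathbb{E}\!\left[\, f \cdot (g \circ \varphi_t) \,\right]
           \;-\; \mathbb{E}[f]\,\mathbb{E}[g].
% Mixing: C_t(f,g) -> 0 as t -> infinity for all suitable observables.
% Exponential decay of correlations: there are constants C, alpha > 0 with
\bigl|\,C_t(f,g)\,\bigr| \;\le\; C\,\|f\|_{C^1}\,\|g\|_{C^1}\; e^{-\alpha t}.
```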
Coming back to those loops: one of the estimates we have to show is that this gap is proportional to the height that you slide down, and that allows you to quantify exactly how much the flow misses forming these layers by. It's a crucial step in proving this exponential decay of the correlation function. The other part of it is that that was a nice smooth picture with nice curves; in fact, things are not that smooth for a billiard. Billiards have these bad singularities. You have these dispersing wave fronts, and they can hit tangentially: here's a scatterer, and here's the tangential trajectory that just grazes it. Below it, trajectories hit and reflect upwards; above it, trajectories don't hit and keep going straight. And so the wave front folds; you saw that also in the video I showed, the folding of the wave fronts. This is actually quite bad: the derivative there blows up, becomes infinite, and that folding creates a lot of problems. So that smooth picture I showed you is a little bit of a lie; in fact, you have to do a lot of work, because you have all kinds of gaps in the loops that you can make.

But anyway, here's the theorem I said I would show. At the end of the day, it works. This is a joint theorem with Viviane Baladi, who's in Paris, and Carlangelo Liverani in Rome, and it says essentially what I've been talking about: if you have a dispersing billiard flow, you have exponential decay of the correlation functions. There is a single constant alpha which gives the exponential rate, and the constant C in front, for the mathematicians in the audience, just depends on the C^1 norms of the functions; so if you stay in some C^1 ball, the decay is uniform.

Okay, great, so you've got the result, exponential decay. Now, why did you work so hard? The paper is something like 140 pages long, so why? That's the first question; maybe I should have asked it before I started. Well, let's go back to this model for heat conduction. Here's one model: you have some kind of tube, which you can think of as being connected to a heat bath at each end, and inside, particles bounce back and forth off of fixed obstacles. You can think of the different colors as different speeds or different energy levels, and they go bouncing around elastically. You'd like to show somehow that the energy profile in a tube like this is linear; in other words, that it obeys Fourier's law. Can we, in a model like this, derive Fourier's law? That would be nice. And if we're going to do this with some chain of cells, and you notice that this is just some pattern of scatterers repeated over and over again, then really we should understand the dynamics of a single cell first, before we try to understand many of them strung together. So here's a picture of a single cell. It's open at both ends, so this is an example of what we call an open dynamical system: mass and energy are allowed to escape, it's not conservative. What kinds of equilibria are possible in this kind of system? Of course, in the long run everything will just escape, so that's a pretty boring situation. But since you're thinking from the point of view of these statistical properties, you're thinking about probability distributions flowing forward in your space, and you can look at conditional probabilities: you condition on not having escaped yet.
Where do particles go before they escape? It turns out that there's a big difference between exponential mixing and non-exponential mixing from this point of view. Equilibria are much nicer if you have exponential rates of escape; the possible equilibria in the system are much nicer and much more familiar to us. If you have sub-exponential rates of escape, you get very anomalous behavior. This is another couple of recent papers I have with other co-authors that look at open systems from an abstract point of view and try to classify the different possible equilibria. So where the exponential decay for the billiard system comes in is to say that, since we have exponential escape, we know what the candidates for equilibria will look like, and so we have a hope of proving some kind of convergence in the single cell. The program going forward is really just that: start with a single cell and see if you can prove convergence to a conditional equilibrium at some exponential rate; string together a bunch of these cells (of course you have to specify some energy interaction, and the physicists in the audience are going to be angry with me because I haven't talked about how these things exchange energy, I know, fine); then you try to show that you go to some intermediate metastable state, and then you take the limit as the chain goes to infinity, and you hope to prove Fourier's law. That would be a long-term goal of this project. That's it. Questions, or refreshments, I guess.

Oh, Matt. Sometimes it's hard to know whether the infinite-length system or the finite one is the easier problem, but why did you pick the one unit cell? It seems like everything is going to escape right away and it will be done. Well, that is maybe just too simple a cartoon. You could think of the one cell as having a little bit more geometry, or smaller holes on the sides, so things don't escape right away. Since you condition on not escaping, you divide by the remaining mass at each time step, and you will actually converge to a probability density which tells you the asymptotic distribution of mass, or energy, or whatever you keep track of, before it exits the system. So I agree that here it might not look that interesting because the holes are so massive, but it does look more interesting in general when the holes are smaller compared to the size of the cell. And since you're conditioning on not escaping, you will converge to a probability distribution, because your sequence has mass one the whole way, so the limit will have mass one. Yeah.

So I have a question. You mentioned that the correlation function will decay exponentially, and we have some other systems where the correlation function never decays, it stays constant. Other than these two classes, do we have cases where the correlation function decays at some rate in between? Yes, that's possible. I'm an optimist, so I brought a whiteboard marker with me today in case there was a whiteboard. There is not a whiteboard. Okay, yes, I could draw you a picture, but I won't draw on the screen. Yes, there are billiard systems like that. There are many examples, but in particular in the class of billiards: I've only been talking about billiards with dispersing boundaries, where things bounce off the outside of the scatterers.
But if you imagine for a moment something with a concave, focusing boundary, say a stadium, like a football stadium, which has two semicircles on either side joined by two straight lines, and you have things bouncing around the inside of those focusing boundaries, then you have a mixing system that has polynomial rates of decay of correlations. Almost any time you put in a focusing boundary, you will get polynomial rates. Yeah.

Are you accounting for the particles' ability to bounce off each other, or just off stationary objects? Oh. In these mathematical billiards the particles only collide with the boundaries. I know that I draw points that look like they could hit each other, but actually the situation is much worse if you ask for hard discs that actually hit each other; that is essentially intractable mathematically right now, forget about it. Recently, a very big paper came out, about three years ago, around 2014, and it proved ergodicity for bouncing hard balls, assuming that all the discs have the same mass. So that is progress, but ergodicity is a little bit less than mixing: it just means that typical trajectories visit the whole phase space, but it doesn't mean that everything distributes evenly like you saw in that video. So we're much further behind on this program for what we call hard balls, or colliding discs. With these billiards, because the point particles can't interact with one another, we can actually analyze them. That's just where we are, unfortunately. So then of course the physicists say, well, okay, but if your point particles can't hit each other, how do they exchange energy? And then there are all kinds of models proposed, like making the disc in the middle spin, or having the particles exchange energy through their collisions with the boundaries. This stuff is all in its infancy. I'm sure a couple of hundred years from now they will look back and say, those guys had no clue; look how little they could prove.

So if you're using this to model heat transfer in a rod, what do the point particles represent and what do the walls represent? Now you're asking me to, well, you're not going to get quantum on me, right? Because if you go quantum, I have no idea. To me, these could be electrons bouncing around carrying energy, and then you have some fixed structure of atoms of the metal in the rod. But then, I know, I've got electrons as point particles. So yeah, this is a very classical notion of heat transfer. Something like that: electrons bouncing around, and the atoms are the fixed things. Yes, yes. I mean, if this were a wire, yeah. Or you could make this some tube, in which case you might have some gas that's heated, and the bouncing things would be the molecules of gas and the fixed things would be some microstructure of the tube. These kinds of models, I mean, the pictures look a little bit different because I've got these round things in the center, but if you look at models from, say, chemical engineering or applied mathematics, of nanotubes, you have models of the microstructure of the tube with little irregularities along the sides, to describe how whatever you want to pass through the nanotube is going to hit the boundary and bounce around inside.
So there are different interpretations of that; this is just a cartoon. Yeah, Dennis.

I'm curious, and it's really more of a matter-of-faith question, to be honest. Some of the people I study worked on problems that turned out to be undecidable, fundamentally undecidable. So given what you just said, that in a hundred years they'll look back on this, is there a sense in which you have faith that this will be able to be solved, or is it something that is fundamentally out of reach? He's kind of a philosopher, too. Well, I'm just kidding. Is it at least your desire, your hope, that this will be solved in two, three, four, five hundred years?

Yeah, I think it will be sooner than that. I do think that, in the direction I see this branch of dynamics moving, we're able to handle more and more complex systems. There are barriers that are taking decades, on the order of decades, to come down, but they are coming down. I don't see a theoretical reason why this type of analysis would stop, at the moment. Now, it is true that the complexity of the analysis is increasing as we go. You also run into the fact that the fields get more and more specialized as the types of questions and the types of proofs we write proceed from one question to the next. There was a time, not that long ago in mathematics, when you could contribute to several fields of mathematics, and the people who can do that now are fewer and fewer, because the questions get so much more technical. So there is an increase in the complexity of what we have to study. But what I am proposing here in this program, I think, is doable, and primarily because it's unrealistic enough to be doable, meaning that it's not physical enough yet to run into those kinds of obstructions. You see, I am assuming point particles, and with that, I think we can do it. If we go to more and more physically realistic models, then we can do less and less, and maybe ultimately you do come up against undecidability questions; you do have an uncertainty principle, actually, when you get down to the physics. Darn it. But it is there. So eventually, when the math models get realistic enough, we will lose some precision.

Okay, maybe it's time for refreshments now. Thank you again. Thank you. Thank you.