The College of Science and Engineering Seminar is here, and it's the same group of people. Today's is a little different, and I won't speak much about Friday's event, but we have a very well-known scientist visiting Armenia, visiting AUA, visiting our president, coming to town over the next few days. And given the topic area, and given our own collective areas of expertise, given the gaps, and that's a kind way of putting it, maybe, for myself and for everyone here, or maybe not everyone here, since some of you I know have a physics background. Given that, we thought we'd happily accept President Polotian's offer to give us a pre-lecture lecture, or pre-seminar seminar. So the whole purpose of today, if I understood correctly, will be more informal, maybe a discussion, maybe some questions and discussion throughout. President Polotian will clarify if I'm saying anything wrong. But the idea is to bring all of us collectively up to speed on statistical physics and some of the very cutting-edge, and, from what I'm understanding now, even controversial areas of science that this very big name who is visiting us is a proponent of. So I think it should be very interesting. And again, I know you guys are very busy with classes and all, and with two additional seminars. I'll just say that we are also busy, and the president is maybe the busiest of them all. So I think it's great that we can all find time for this. On behalf of all of us, thank you, President Polotian. I don't know if that's a good enough introduction. Yeah, no, that's fine. Thank you very much. I'm happy to discuss this. As Adon just said, the speaker tomorrow really is very well known internationally, especially in the field of physics, and especially in this interface between thermodynamics and statistical physics and how this connects to complex systems. Do you remember the title of the talk?
It's complex systems: natural systems, social systems, computer systems, artificial systems as well. Things that thermodynamics and statistical mechanics have traditionally never been applied to. And so it is controversial. You might think that in the sciences everything is very objective, that people just look at results and there's no room for personalities and debates and things like that. In this area there is. What tomorrow's speaker is doing, his whole program of research, is very controversial. And there's a lot of interest in it throughout the world. I would say there are on the order of thousands of papers published on this. And interestingly, this is one of those things that is catching on. There are a lot of papers in Europe. There are many, many papers from Latin America; this is something that has captured the imagination of Latin American scientists in particular. And elsewhere as well, in Asia as well. There's a long list of countries that papers are coming out of on the so-called Tsallis thermostatistics that he has been championing. It's an international effort, from all over the world. And proportionately, there are fewer papers on this coming out of the U.S., which I find interesting and don't really know how to explain. Now, number one, this is mathematical to some extent; that's unavoidable. But the mathematics it involves is really no more than things you would get in a calculus sequence, in particular multivariable calculus. I assume you all have a background in calculus. You've all, for the most part anyway, had engineering degrees coming into this program. I'm going to assume that, but not so completely that I won't remind you of a few things that will be coming up in the talk. The second thing that it involves is thermodynamics. And there I don't necessarily assume that you have a background.
Who here has studied thermodynamics before coming to AUA? Okay, all right, let me spend more time giving an introduction to that. The third thing it involves is statistical physics, and in that area I'm going to assume that you don't know anything about it. Am I right about that? How many of you have seen statistical physics coming in? So, okay, my assumption is correct. And then I'll talk a bit about Tsallis's generalization of all this. So that's the outline. I'll talk a little bit about multivariable calculus, just to remind you of the two or three things you need to remember for this. Then thermodynamics; then the usual Boltzmann-Gibbs thermostatistics, which has been, for the past 150 years, how everybody has done everything; and then Tsallis's generalization of it and why it's so controversial. Okay, let's start with the following thing that I hope, as soon as I say it, you'll remember from multivariable calculus. Suppose I have a quantity z and it depends on two things, x and y. And I ask: how can z change? There are only two ways. I could change x, or I could change y. And so the change in z, the little differential change in z, is the rate of change of z with respect to x times the change in x, plus the rate of change of z with respect to y times the change in y: dz = (dz/dx) dx + (dz/dy) dy, with partial derivatives. There are only two things z depends on, x and y; the change in z can be due to a change in either of those two things, and so the total change in z is just the sum of the two contributions, right? You all remember what the partial derivatives are. That kind of makes sense to you. If I've got a variable like pressure that depends on volume and temperature, and I change the volume a bit and I change the temperature a bit, part of the change in pressure is going to be due to the change in volume, and part of it is going to be due to the change in temperature. Is this okay? Yeah?
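To make that concrete, here is a small numerical sketch. The function z = x^2 * y and the point (2, 3) are made-up examples, not from the talk; the check is that the total differential predicts the actual change in z for small dx and dy:

```python
# Numerical check of the total differential dz = (dz/dx) dx + (dz/dy) dy.
# Illustrative example function z(x, y) = x**2 * y at the point (2.0, 3.0).

def z(x, y):
    return x**2 * y

x0, y0 = 2.0, 3.0
dx, dy = 1e-6, 1e-6

# Partial derivatives, computed by hand: dz/dx = 2xy, dz/dy = x**2
dz_dx = 2 * x0 * y0
dz_dy = x0**2

# Actual change in z versus the prediction of the total differential
actual = z(x0 + dx, y0 + dy) - z(x0, y0)
predicted = dz_dx * dx + dz_dy * dy

print(actual, predicted)  # agree to first order in dx and dy
```

The two numbers differ only by terms of second order in dx and dy, which is exactly the statement of the theorem.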
And what that means is that if I have one differential expressed as a sum of quantities times two other differentials, I can interpret the coefficients as the rates of change. So if I see dz = a dx + b dy, I know that a is the rate of change of z with respect to x, and b is the rate of change of z with respect to y. In a calculus course this dz is called the total differential. Sometimes books will go a little further and say that a is the rate of change of z with respect to x at constant y, which is what the subscript on the partial derivative means, and b is the rate of change of z with respect to y at constant x. But that's kind of understood: when you take a partial derivative, you treat the other variables as constants, right? Okay, so far so good? Okay, the next thing you should try to remember is how to compute a maximum or a minimum. This is like optimization theory. Suppose I have some function z of x and y, and in the example I'm giving it's equal to x squared plus y squared, right? And somebody asks you: find the minimum of that function, the smallest value the function can take on. What is it? Zero. Zero, where is it? At (0, 0), good. Okay, so how would you find that? Well, I take the derivative of z with respect to x, which is 2x, and the derivative of z with respect to y, which is 2y, and I set both of those derivatives equal to zero. That gives me the location, x = y = 0, and then I can plug those in and find that the minimum is zero. Because at the minimum, the function isn't changing very much: I can change x a little bit and y a little bit, and at the minimum that doesn't change z very much, and that's why I can set both derivatives equal to zero and find the minimum. Okay, so far so good. One last thing; let me give you an example.
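As a quick numerical illustration of "the minimum is where both partials vanish," here is a simple gradient-descent loop. The descent method itself is my own illustrative choice, not something from the talk; the point is just that driving the partials 2x and 2y to zero lands at the origin:

```python
# Minimizing z(x, y) = x**2 + y**2. The partials are dz/dx = 2x and
# dz/dy = 2y, and both vanish at (0, 0). This loop drives both partials
# toward zero numerically (a sketch; the talk solves the equations directly).

def grad(x, y):
    return 2 * x, 2 * y   # (dz/dx, dz/dy)

x, y = 5.0, -3.0          # arbitrary starting point
for _ in range(1000):
    gx, gy = grad(x, y)
    x -= 0.1 * gx
    y -= 0.1 * gy

z_min = x**2 + y**2
print(x, y, z_min)        # essentially (0, 0), with z at its minimum 0
```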
This is the last thing I'll remind you about from multivariable calculus. Suppose I have a function and I want to find its extremum, maximum or minimum, and I have a constraint. Once again I'm going to take the function z(x, y) = x squared plus y squared, and I want to find its minimum, but subject to a certain constraint: the solution has to lie on the line x + y = 2. So I can't accept the answer x = 0, y = 0. That doesn't work anymore, because it doesn't satisfy the constraint. Okay, so how do I do that? Let me show you geometrically what this looks like. That's x and that's y. The contours of constant z are circles, right? With a minimum at the origin. But the constraint says that the solution has to lie on this straight line. Where is that line? x + y = 2 is a line that goes like this: it intersects the x-axis at (2, 0), it intersects the y-axis at (0, 2), and it's the straight line through both of them. So now, from this geometric picture, you know where the minimum is going to be: right about there somewhere. It's not at the origin anymore. How do I find it? There are a couple of ways you could do this. One way is to solve the constraint for y, so y = 2 - x, plug that in, get a function of x only, and minimize that. That's one way to do it, and it's a perfectly good way. The trouble is that sometimes the constraints are difficult to solve for one variable or the other, and you can't easily solve for it. Is there a better way? There is, and I'm not going to tell you why it works, but I'll tell you what it is. The other way is to say: that's your objective function, the function you're trying to minimize, and this thing here is the constraint function.
Okay, so that's the constraint function and this is the objective function, and the trick is the following. I'm going to define a new function u to be the objective function plus a constant times the constraint function. In this case that's u = x squared + y squared + lambda times (x + y). The trick is this: minimize this function. Take the partial derivatives of this thing and set them equal to zero. If I do that, du/dx = 2x + lambda and du/dy = 2y + lambda, and I set those equal to zero. Now I've got a problem: I've got two equations in three unknowns. What's the third equation that I need? The third equation is the constraint: x + y = 2. Now I have three equations in three unknowns. Solve it; it's easy, so I won't solve it here. What you'll find is that x = 1, y = 1, and lambda = -2. That's the location of the minimum. So all you have to do is take the objective function, add a constant times the constraint function, and this constant is called a Lagrange multiplier. Put them together, minimize that, then throw in the constraint equation, and you're done. Okay? That tells you how to find an extremum when a constraint is present. Those are the only things I'm asking you to remember from multivariable calculus. If you remember those things, everything else is just applications. Yup? One question, actually. If I have something other than x squared there, so that I get nonlinear equations after differentiating, can I repeat this method to reduce the power of x and get linear equations later? Can I use the same method on the resulting system of equations again? No, it won't necessarily be linear. Yes, but if it's not linear, can I start from the beginning, create another new function, differentiate it again, and continue like that?
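The three-equation system from the worked example above can be checked in a few lines, mirroring the hand calculation by substitution (plain Python, no libraries):

```python
# Lagrange-multiplier system from the example:
#   2x + lambda = 0
#   2y + lambda = 0
#   x + y = 2
# From the first two equations, x = y = -lambda/2. Substituting into the
# constraint: -lambda/2 - lambda/2 = 2, so lambda = -2.

lam = -2.0
x = -lam / 2
y = -lam / 2

print(x, y, lam)       # 1.0 1.0 -2.0, as stated in the talk
z_min = x**2 + y**2
print(z_min)           # 2.0, the constrained minimum of x**2 + y**2
```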
And can I later continue with the same method again, to solve that system and reduce it once more? Yeah, you might get nonlinear equations to solve, but it works. It always works. It doesn't matter. We used a quadratic function here because when you differentiate it you get something linear, and that makes life easier. And I used a linear constraint just because that makes life easier too; it gave three linear equations. But I didn't have to do that. The objective could have been cubic or quartic, and likewise the constraint could have been nonlinear as well: quadratic, cubic, no problem. It could even be transcendental, with exponentials or trig functions running around in there. It always works. This general method always works; it's independent of the particular functions. So you just need any differentiable function? Any differentiable function. It just has to be differentiable. Yeah. But, you know, your point is well taken, because you could end up with nasty algebraic equations to solve. But assuming you can solve them, it still works. Other questions? This is totally informal, so interrupt me anytime you want, and if you have questions, absolutely interrupt me. How many of you vaguely remember seeing this in a multivariable calculus course? Is that all? No? Some do, some don't? Some heads are nodding vertically, others are nodding horizontally. Does it make sense? Kind of? I mean, you can see it. Okay. All right. Good. So on to the next thing, which is thermodynamics. Now you get a fifteen-minute introduction to thermodynamics. So: thermodynamics was invented in the 1800s, and the reason was that people started building steam engines in the 1800s and they wanted to understand how they work. So thermodynamics arose from engineering. Its beginnings were in engineering, not in physics, not in chemistry, not in any of the other fields that use it now.
It was invented for very, very practical reasons, and you can see that in the examples people typically give in thermodynamics. Today we realize thermodynamics is much more general than steam engines. It's much more general than gases and liquids and cylinders and pistons; it can be applied to many, many things. You can apply thermodynamics to magnets. You can apply thermodynamics to all kinds of physical systems. But there are a few things that all of them have in common. You have some kind of system, usually a material system, and it can be in a whole bunch of different states. What you're trying to understand is what state it wants to live in, what state it stably lives in, and what the relationships are between the properties of that state. All thermodynamic systems have the ability to absorb some amount of heat: you can heat them up, you put a fire under them and they'll get hotter. I can add heat to them. All thermodynamic systems have the ability to do work on their surroundings. If I have a steam engine, when the steam pushes the piston and turns the wheel, it's doing work. Work in the sense of physics: force times distance. And there are a few other properties that any thermodynamic system has. There's always some total internal energy in the system: all the energy contained in it, the heat energy or magnetic energy or potential energy, any other energy contained in the system. We call the internal energy U. The other property that any thermodynamic system has is temperature: it lives at some temperature T. And you can measure temperature in any units you want. You can measure temperature in degrees Celsius, but that's not very convenient, because it doesn't go to zero at absolute zero. It's much better to measure temperature in kelvins. And it's even better just to measure temperature in ergs or joules, as a unit of energy. Temperature is energy.
And so you might as well use that unit, and that's the unit I'm going to use throughout. Then there are optional properties that thermodynamic systems can have. If it's a gas in a container, it can be under a certain amount of pressure P, and the container can have a certain volume V. If it's a magnet, it has a certain magnetic strength or magnetization. All of these are optional properties of the system. Okay. So here's the first point. There are two things that can happen to a thermodynamic system: I said I can add heat to it, and it can do work. Now, if you add heat to it, you're adding energy to it. If it does work, it's losing energy. This is conservation of energy. So the first law of thermodynamics is: the change in the internal energy of the system is the amount of heat you add to it minus the amount of work that it does, dU = dQ - dW. If you add a bunch of heat to it and then it turns a wheel, well, the heat you add is energy going in, and the energy it expends to turn the wheel is energy going out, so you've got to subtract those two things. That's all there is to the first law of thermodynamics. But now there's a question: what are the independent variables in this thing? Before, you saw on the board that dz = (dz/dx) dx + (dz/dy) dy; in that case the independent variables are x and y. Those are the variables you can change. The dependent variable is z; that's the thing that changes when you muck around with the independent variables. So in this case, what are the independent variables? We don't really know yet. You might say, well, they're Q and W; maybe heat and work are the independent variables. Unfortunately, they're not, and people in the early 1800s realized this.
What they realized is this. Suppose, and this is the classic example of a thermodynamic system, that I have a cylinder filled with gas or liquid or vapor or something like that, and I have a piston that I can push up and down, so I can compress the gas or let it expand. And I might be able to hold this thing at a temperature T: I can put it under water and keep the water at temperature T. Now I have two ways to change the system. I can push down on the gas, or I can let up on the piston and let the gas expand. Another way to change the system is to change the temperature T: I can lower it or I can raise it. So in one case I'm changing the volume V, and in the other case I'm changing the temperature T. Now suppose I go around a loop. Suppose I compress the gas, lower the temperature, then expand the gas, then raise the temperature to come right back to where I started. When I'm done, the temperature is the same and the volume is the same. I've taken the system through a cycle. Air conditioners and refrigerators do this kind of thing: you compress the gas, change the temperature, expand the gas, change the temperature back, and you've gone through a thermodynamic cycle. And you know what? If you do that and come right back to the beginning, the total heat you've added won't be zero, and the net work the system has done on the environment won't come back to zero either. Q and W are not functions of the state of the system. Right? So dQ and dW here are not really the differentials of anything. There's no function Q that depends only on the state of the system: you can take the system through a cycle, come right back to where you started, and the accumulated Q will be different from zero. It's not a function; there's no function Q that dQ is the differential of. Likewise with work: there's no function W whose value comes back when I go around the cycle. And that's a problem.
For this reason there are many textbooks that don't like writing it that way, and they'll put a little slash into the d notation, like that, to emphasize the fact that Q is not a function and W is not a function. They don't exist as state functions. dQ is just the amount of heat you add, but it's not the differential of a function of the state of the system. That's a problem. And for the first fifty years of thermodynamics, from 1800 to 1850, I would argue people spent the whole time trying to figure out what the right independent variables were for this equation. Questions about that? Does that kind of make sense? Okay. So let's take one of them. We've got dQ and dW; they're both problems, neither is the differential of a function of the state of the system. Let's figure out how to deal with dW first. Let me draw a picture here; let's go back to the cylinder. Now I'm going to use my artistic abilities and try to do a 3D picture of the cylinder. Can you kind of see what that is? The piston moves up and down there. Let's suppose the area of the piston is A, the pressure of the gas inside is P, and I move the piston a small distance; call it dx. And suppose the gas is doing work on its environment, so the gas is going to expand and push the piston up a little bit. Right? Work is force times distance. (Oh, thank you. This is energy saving. Nice. They're not moving enough, I guess. That's a little bit too dark. Can you still see this?) What's the force on that piston? Force is pressure times what? Area. So the force being exerted on the piston is pressure times area, P times A. Now what's the work? How do you express work in terms of force and distance? F times dx. So that's P times A times dx. And what's A times dx? That's the change in volume: area times dx is the change in volume. So the work that's done is pressure times the change in volume: dW = P dV.
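As a sketch of work as the accumulation of P dV, here is a numerical integration for an isothermal ideal-gas expansion. The ideal-gas pressure P = NkT/V and the unit choice NkT = 1 are illustrative assumptions for the example, not from the talk:

```python
# Work done by an ideal gas expanding isothermally from V1 to V2,
# computed by summing P dV over many small steps and compared with the
# closed form NkT * ln(V2/V1). Units chosen so that NkT = 1.

import math

NkT = 1.0
V1, V2 = 1.0, 2.0

n = 100000
dV = (V2 - V1) / n
W_numeric = 0.0
V = V1
for _ in range(n):
    P = NkT / (V + dV / 2)   # pressure at the midpoint of the step
    W_numeric += P * dV
    V += dV

W_exact = NkT * math.log(V2 / V1)
print(W_numeric, W_exact)    # both approximately 0.6931
```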
That's the thermodynamic work being done by the piston. Yeah? So now we can write the first law like this: dU = dQ - P dV. And what's the advantage of doing that? Remember, dW doesn't correspond to a function of the state of the system. It's not a real state function. V certainly is a function of the state: when I go through a cycle, the volume has to return to where it was. Obviously, if I come back to the same state, the volume is the same. Volume, we know, is a real thermodynamic quantity. It depends on the state of the system; if I cycle the thing, the volume comes right back to where it was. No problems there. So we've solved the problem in the second term. We still haven't solved it in the first. Okay? By the way, internal energy is also clearly a function of the thermodynamic state. Energy is something real, something you can measure, and it depends only on the thermodynamic state. If two things have different energy, they're in different thermodynamic states. If two things have different volume, they're in different thermodynamic states. But Q is one of those things where I can cycle the thermodynamic state, come back to the same volume and temperature, and the accumulated Q will have changed. So how do I deal with that? Well, for the first fifty years of thermodynamics, from 1800 to 1850, people didn't know how to deal with that. What they did was use this equation, which we just figured out how to derive. They knew that steam engines had to obey this equation. And then they also used some equation of state for the gas in the system. Now, in high school many of you will have learned the ideal gas equation of state: pressure times volume is proportional to temperature. Right? You've seen this, I'm hoping, at some point. There's a relationship between pressure and volume and temperature for any gas. If you squeeze the thing by lowering its volume, its pressure is going to increase, if you do that at constant temperature.
And you can write an algebraic relationship between pressure, volume, and temperature; for an ideal gas, that's it. So between 1800 and 1850, when they were making steam engines, they had this equation and they had this equation, and they could put them together and predict things. They could predict, for example, that if the steam in the cylinder expands with the total energy held constant, then the amount of heat you have to add is proportional to T over V times the change in volume, and they could figure things out from that. That was how they were doing thermodynamics until about 1850. And they still didn't know what the other independent variable was. Then finally, in 1850, Rudolf Clausius figured it out. I'm not going to tell you the detailed reasoning, but I will tell you the result, and the result is easy to understand even if you don't understand everything that goes into it. What Clausius noticed is this: even though the little changes dQ, measured going around a cycle back to where you started, don't add up to zero, so that you have net heat gain or loss in the system, if you weight each of them by one over the temperature, they will add up to zero. So if I take dQ over T as I go through the cycle and add all those up, I come back to zero when I return to where I started. That was published in 1850 by Clausius. And what Clausius concluded is that even though dQ is not the differential of anything, dQ over T is the differential of something. He called that quantity S, and he gave it the name entropy. The entropy S is a real function of the thermodynamic state of the system: if I go through a cycle, the entropy comes right back to where it started. That was a key observation in thermodynamics. Questions?
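Clausius's observation can be illustrated on a toy cycle. The specific cycle, the monatomic ideal gas with Nk = 1 and Cv = 3/2 in arbitrary units, and the standard ideal-gas heat formulas are my illustrative choices, not from the talk; the point is that the little dQ's do not sum to zero around the cycle, but dQ/T does:

```python
# A four-leg cycle of a monatomic ideal gas (units with Nk = 1, Cv = 3/2):
# isothermal expansion at T1, isochoric heating to T2, isothermal
# compression at T2, isochoric cooling back to T1.

import math

Nk, Cv = 1.0, 1.5
T1, T2 = 1.0, 2.0
V1, V2 = 1.0, 2.0

# Heat absorbed on each leg (standard ideal-gas results):
Q = [
    Nk * T1 * math.log(V2 / V1),   # isothermal expansion at T1
    Cv * (T2 - T1),                # isochoric heating at V2: T1 -> T2
    Nk * T2 * math.log(V1 / V2),   # isothermal compression at T2
    Cv * (T1 - T2),                # isochoric cooling at V1: T2 -> T1
]

# Integral of dQ/T on each leg:
Q_over_T = [
    Nk * math.log(V2 / V1),
    Cv * math.log(T2 / T1),
    Nk * math.log(V1 / V2),
    Cv * math.log(T1 / T2),
]

print(sum(Q))         # nonzero: Q is not a state function
print(sum(Q_over_T))  # zero: dQ/T is the differential of something, S
```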
I have a strange question: was this discovered by him experimentally, or mathematically? It was more mathematical. If you take a course in thermodynamics, one of the things you'll be forced to study early on is the idea of a Carnot cycle, and that's where this idea came from. But even that was grounded in experiment. I think it was motivated experimentally: the steam engines, if you think about them, are constantly going through cycles. That's all a steam engine does, cycle all day long. So people had to understand the behavior of thermodynamic systems that cycle. I think it was motivated by experiment, and then people like Carnot and Clausius scratched their heads, thought about it a lot, and came up with that key observation. I'm omitting a lot here, but that's the point. Even though dQ isn't a differential, dS = dQ/T is, and S is called the entropy. And that means that if dS is dQ over T, then dQ is T dS. Right? So just as we replaced dW with P dV, we can now replace dQ with T dS, and the first law becomes dU = T dS - P dV. Questions? The independent variables are entropy and volume. The internal energy depends on entropy and volume. And now, if you keep that multivariable calculus theorem in the back of your mind, you can easily read off that the rate of change of energy with respect to entropy is the temperature, and the rate of change of energy with respect to volume is minus the pressure. Now everything is a thermodynamic variable. This was the state of thermodynamics in 1850, and aside from details it's pretty much unchanged since. Clausius figured out the right independent variables to express energy in terms of: entropy and volume. And you can figure out what the partial derivatives with respect to those quantities are, and those partial derivatives are very useful. One is the temperature, the other is the pressure.
They're two of the most fundamental things you can measure. Questions about that? Now, so entropy and temperature, sorry, entropy and volume are now considered the natural independent variables in which to express the energy. If you know the energy in terms of entropy and volume, you can take partial derivatives to get temperature and pressure. Of course, if you know U as a function of S and V, when I take those partial derivatives I'm going to get temperature as a function of S and V, and pressure as a function of S and V. But now, once I have temperature as a function of S and V, and pressure as a function of S and V, if I eliminate S between those two, what am I going to get? Pressure as a function of T and V. And what's that? The equation of state. So in other words, from this approach I can derive the equation of state. I don't have to give it separately as an extra bit of information. If I know the energy as a function of its natural variables S and V, that's as good as knowing the equation of state, and more. I know everything you can possibly know about the system thermodynamically if I know energy as a function of S and V. I can derive the equation of state from that; I can start thinking about changes in energy, changes in temperature, changes in volume; I can figure out exactly what my steam engine is going to do and what its efficiency will be. All of that. I just need to know U in terms of S and V. And by the way, the notion that there is a function, entropy, that is a function of the thermodynamic state of the system is the beginning of what's called the second law of thermodynamics. It's just the first half of the second law of thermodynamics. So now you've seen what that is.
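As a sketch of deriving the equation of state from U(S, V): take a toy form for a monatomic ideal gas, U(S, V) = V^(-2/3) * exp(2S/3), with Nk = 1 and all constants absorbed (an illustrative normalization, not from the talk). Differentiating gives T and p, and eliminating S yields p*V = T:

```python
# T = dU/dS and p = -dU/dV, computed by central finite differences from a
# toy U(S, V) = V**(-2/3) * exp(2*S/3). For this form, p*V should equal T,
# which is the ideal-gas equation of state with Nk = 1.

import math

def U(S, V):
    return V ** (-2.0 / 3.0) * math.exp(2.0 * S / 3.0)

S0, V0 = 1.0, 2.0
h = 1e-6

T = (U(S0 + h, V0) - U(S0 - h, V0)) / (2 * h)    # T = dU/dS
p = -(U(S0, V0 + h) - U(S0, V0 - h)) / (2 * h)   # p = -dU/dV

print(p * V0, T)   # equal: the equation of state falls out of U(S, V)
```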
There's one other bit to the second law of thermodynamics, and Clausius noticed this too. If a system changes in some slow, reversible way, without any friction, without any loss, the entropy is unchanged. But if you change the system rapidly, in an irreversible way that gives rise to friction, the entropy changes. And what Clausius noticed is that it can only increase; entropy can never decrease. That's the second part of the second law of thermodynamics. In other words, dS is always greater than or equal to zero. Entropy only increases; it can only go up; its differential is never negative. And that means equilibrium happens when the entropy is maximized. So now: we had one big problem, we didn't know what the independent variables were, and it took people fifty years to solve that problem. We understand now that the natural variables of the internal energy are entropy and volume. Are there any remaining problems? Yeah, there's a big one: how do you measure entropy? I know how to measure temperature: I get a thermometer. I know how to measure pressure: I get a pressure gauge and stick it in the tire. I know how to measure volume: I take a ruler, measure the size of the container, and figure out the volume. But entropy is not easy to measure. It turns out you can, but it's not easy to do. You really don't want entropy to be one of your independent variables, and it's kind of unfortunate that it turned out to be one. It would be nice if we could get rid of it and use a different independent variable instead. Is there a way to do that?
Yeah, there is a neat way to do it, and the way is to define a new energy, a new kind of energy. Instead of calling it U, we're going to call it A. It's called the Helmholtz free energy, and it's equal to U minus the temperature times the entropy: A = U - TS. And my claim is that A is a function of temperature and volume, two easy things to measure; it's no longer a function of entropy. Now you might say: how do you know it's not a function of entropy? I see entropy on the right-hand side of that equation. How do I know that A, the way I've constructed it, does not depend on entropy? I'll show you. I can differentiate A with respect to S. If I do that, the first term gives dU/dS, and the second term gives -T, so I get dU/dS - T. But remember that dU/dS is T. So that's T - T, or zero. In other words, the derivative of A with respect to entropy is zero, which means it doesn't depend on entropy. I've constructed a new quantity that doesn't depend on entropy. This is good, because I don't know how to measure entropy. So A is a different kind of energy from the one we started with. It's not the total internal energy; it's the internal energy minus TS, and it's called the free energy, or the Helmholtz free energy. Now, this maneuver, subtracting TS from a variable to create a new variable with different independent variables, is called the Legendre transform. Here's one way to understand why it works. If I write U - TS, let me factor out T; in fact, I'm going to factor out minus T. Then I get A = -T times (S - (1/T)U). From here on, whenever you see beta, it means one over the temperature. It turns out the inverse of the temperature is more useful than the temperature itself, so we give it a special name, beta. So I'm writing A = -T(S - beta U). Now remember: in equilibrium, S is maximized; we get equilibrium when S is big. Okay, now what we're doing is taking S minus a constant times U. Does that remind you of
anything? Does that remind you of anything I showed you from multivariable calculus? Beta looks an awful lot like a Lagrange multiplier. If I were maximizing S at constant internal energy, I would introduce the Lagrange multiplier beta and maximize the quantity S - beta*U instead. The only thing is, since I stuck a negative sign in front of it, I've got to minimize that quantity. So minimizing the free energy is the same thing as maximizing the entropy at constant energy. That's a really, really good thing to do, because energy is conserved in systems. If energy is conserved, then you want to keep it constant when you maximize entropy; you don't want to maximize entropy and let the energy fly all over the place. The energy can't fly all over the place, it's conserved. You want to maximize entropy at constant energy, and you would do that with a Lagrange multiplier, and what you're seeing is that the free energy does exactly that: if I minimize the free energy, I've got my maximum-entropy state at constant energy, and that's why it's so useful. So that is basically my 20-minute introduction to thermodynamics, and the result of it is that you really want to minimize the free energy of the system, where the free energy of the system is the energy of the system minus temperature times entropy. And by the way, you can see that V and T are the natural variables, the independent variables, of the free energy. I'm defining A to be U - TS, so what's the differential of A? Well, it's the differential of U minus, and now I do the derivative-of-the-product thing, the first times the differential of the second plus the second times the differential of the first, so I get dA = dU - T dS - S dT. But remember that dU is T dS - p dV, and now the T dS cancels; that's why entropy goes away as an independent variable, and I just end up with dA = -p dV - S dT. What does that mean?
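The Lagrange-multiplier picture above can be checked with a small numerical sketch. This is not from the talk: the two-state energies and temperature are made-up values, units are chosen so that Boltzmann's constant is 1, and the comparison anticipates the microscopic probability formula e^(-E/T)/Z that the lecture introduces later. Minimizing F(p) = <E> - T*S over the occupation probability p of a two-state system lands on exactly that value:

```python
import math

# Two-state system at temperature T; the energies and T are made-up values,
# in units where Boltzmann's constant k_B = 1 (as in the lecture).
E1, E2, T = 1.0, 2.0, 1.5

def free_energy(p):
    """F = <E> - T*S for occupation probability p of state 1."""
    energy = p * E1 + (1 - p) * E2
    entropy = -(p * math.log(p) + (1 - p) * math.log(1 - p))
    return energy - T * entropy

# Brute-force minimization of F over p on a fine grid
ps = [i / 10000 for i in range(1, 10000)]
p_star = min(ps, key=free_energy)

# The microscopic prediction introduced later in the talk: p1 = e^(-E1/T)/Z
Z = math.exp(-E1 / T) + math.exp(-E2 / T)
p_boltzmann = math.exp(-E1 / T) / Z

print(p_star, p_boltzmann)  # the grid minimizer matches the Boltzmann weight
```

The brute-force grid search is deliberately naive; the point is only that the minimizer of the free energy coincides with the maximum-entropy-at-fixed-energy solution.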
Well, once again you use that theorem over there from multivariable calculus, and you can see that the derivative of A with respect to volume is the negative of the pressure, and the derivative of A with respect to temperature is the negative of the entropy. So this means the natural variables for the free energy A are temperature and volume. If I knew A in terms of temperature and volume, if I knew what that function was, then from this I could compute the entropy as a function of temperature and volume, and from this I could produce the pressure as a function of temperature and volume. That second thing right there is the equation of state: pressure as a function of temperature and volume. Once again, if I knew A as a function of T and V, I could get the equation of state and much more. Okay, that's it for thermodynamics; questions at this point? Because now I want to go to the microscopic. This was everything that people understood about thermodynamic systems up until about 1860, and we have captured it here. I know that's a lot to digest in a few minutes, but you've got the gist, and I'm hoping that, even though I'm putting the equations up there, I'm talking a lot too, so that even if you don't follow the equations you hear it from my words. Does it make sense?
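As a sketch of the claim that knowing A(T, V) gives you everything, take a concrete free energy and differentiate it numerically. The functional form below is illustrative (up to additive constants it is the ideal-gas free energy derived later in the talk), and the particle number, temperature, and volume are made-up values, with k_B = 1:

```python
import math

# A concrete Helmholtz free energy A(T, V) for N particles; illustrative
# only (up to additive constants it is the ideal-gas A derived later in
# the talk), with a made-up N and units where k_B = 1.
N = 4.0
def A(T, V):
    return -N * T * math.log(V) - 1.5 * N * T * math.log(T)

T0, V0, h = 2.0, 3.0, 1e-6

# Equation of state: p(T, V) = -dA/dV, by central finite difference
p = -(A(T0, V0 + h) - A(T0, V0 - h)) / (2 * h)

# Entropy: S(T, V) = -dA/dT, the other first derivative of A
S = -(A(T0 + h, V0) - A(T0 - h, V0)) / (2 * h)

print(p, N * T0 / V0)   # the pressure matches N*T/V for this choice of A
print(S)                # and the entropy comes out of the same function
```

Both thermodynamic quantities come out of one function of (T, V), which is exactly the sense in which the free energy contains everything.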
Practically speaking, what is the minimization of free energy like in a real system? How would you characterize it, how would you describe it? Suppose that I have a system and I have some way to capture its free energy as a function of temperature. Now, you don't always have that in a real system: if I change the temperature I'm going to change the energy, likewise the volume, and I'm going to produce heat as I do it, so it's very difficult to do physically. But if I have some way of measuring the free energy as a function of temperature, the notion is that's the thing you should minimize in order to figure out what equilibrium is. Here's one way to understand it, and this is for an irreversible process; an irreversible process, something with friction, is what will take you to that minimum of free energy. Why is an energy minimum associated with equilibrium? Suppose I have a little valley here and I put a ball in there, and there's friction. What's going to happen? The ball is going to roll down to the bottom of the valley. In other words, the ball is seeking the minimum of potential energy; energy minima tend to be equilibria. In the case of the mechanical system it's easy to see; in the case of the thermodynamic system it's really difficult to get any intuition for it. But the free energy is to the thermodynamic system what the potential energy is to the ball in the valley: the thermodynamic system tends to roll down to a minimum of free energy the same way the ball rolls down to a minimum of potential energy. That's one possible intuition. You could describe it very easily: let's say we have a room at some temperature, and all the different energy flows are canceling out, so there is nothing happening. All of a sudden you bring in a hot cup of tea, and naturally the cup of tea will give its energy to the surroundings; the energy of the surroundings will increase, but the energy of the cup will decrease. That's one situation.
Or you bring a cold cup into the room: it will take energy from the surroundings until there is no more exchange. And when there is no more exchange, the coffee cup being at the same temperature as the room, that's like the ball at the bottom of the valley: it's where it's happiest, and that's what the thermodynamic system will eventually approach. Other questions? So, what happened in 1860 to screw all of this up? The atomic theory came along, and people stopped believing that gases and liquids were smooth and continuous down to microscopic levels. They began to understand that there were atoms; the idea of molecules didn't quite come along yet originally, but in the beginning people knew there were particles down there somewhere, and that if you keep chopping things finely enough, you're going to get to the point where there are individual particles moving around, banging into one another. They're just too small for us to see, but we know they're there, and people began to understand this around the end of the 19th century. And how might you understand thermodynamics in this picture? Well, the internal energy is then the sum of all the kinetic and potential energies of these molecules or atoms banging around with one another. You add up all the kinetic energy, you add up all the potential energy, and that's got to be your internal energy; there's nothing else down there, that's all the energy there is. So the internal energy, these atomists wanted to say, is the sum of all the kinetic and potential energies of the atoms in the system. What's the temperature? Well, by this way of thinking, the temperature is the average kinetic energy of an atom. What's the pressure? Well, if I have a wall of the container, the atoms are banging into it, and so they are exerting a certain amount of force per unit area, on average, on the wall. The atoms are constantly hitting the wall, and in a time-averaged sense they are exerting a certain force per unit area on the walls. That's the
pressure. So pressure is atoms banging into walls, temperature is the average kinetic energy of an atom, and the total internal energy is the sum of all the kinetic and potential energies of everything. And somehow this microscopic theory had to be reconciled with the macroscopic theory. Now, there are some fields in which this reconciliation has been achieved, and there are other fields in which it has not. I would argue that economics is a field where we have microeconomics, we have macroeconomics, and we still don't know, after 50 or 100 years, how one is derived from the other. In physics we were fortunate: they figured it out in the 1860s and 1870s. It was realized how you could derive thermodynamic properties just from the idea of molecules hitting one another and exerting forces on one another, and that's what I'll show you next. This was done by Boltzmann working in Germany, Gibbs working in the United States, and Maxwell working in Scotland, in the 1860s and 1870s. So now we have to set up a new framework. Thermodynamics is our macroscopic theory, and it has macroscopic states. If we go down to atoms, we have many, many, many more states: the atoms individually have velocities, they have positions, and in order to reproduce the microscopic state of the system we would have to put every atom in exactly the same place with exactly the same velocity. There are zillions, infinities, of microscopic states of the system, many, many more than thermodynamic states. And what that means is that many different microscopic states correspond to the same macroscopic state. If I have two states that differ just because at one moment one atom is in the same place but headed in a different direction, those correspond to the same macroscopic state. There are many, many microscopic states corresponding to the same macroscopic state: thermodynamic states are coarse-grained, microscopic states are much more fine-grained. Now let's pretend for a moment that these states are discrete: there's microscopic state number one, microscopic state number two, microscopic state number three; let's pretend that we can count them. In reality, in classical mechanics you can't; there's a continuum of states at the microscopic level. Ironically, in quantum mechanics you can count them: you get discrete energy states quantum mechanically, and for this reason quantum statistical mechanics is arguably easier than classical. You can sum over discrete states in quantum mechanics, whereas in classical mechanics you have to integrate. It makes life easier to use discrete states, so let's use discrete states. You don't need to know any quantum mechanics; just assume that the states are countable: you can have one state, another state, and another state. And what properties do these states have? Every microscopic state has a certain energy associated with it; we write epsilon_j to denote the energy of state j. That's the only property that we're going to attribute to the microscopic states. Now, that epsilon_j may depend on the volume of the container, it may depend on a lot of other things, but that's all we're going to need here. Here is one way of understanding what Gibbs and Boltzmann came up with. They said that the probability of a microscopic state is just proportional to e to the negative 1 over temperature, remember beta is 1 over temperature, e to the negative beta times the energy of that state: p_j = (1/Z) e^(-beta * epsilon_j), where 1/Z is just a proportionality constant. Okay, so the probability of state j is just the exponential of minus beta times the energy of the state. Now, for the moment, let's regard that as a postulate, like an axiom. We're going to take it to be true and see what happens as a result, and then I'll show you how it doesn't need to be the most fundamental postulate, how it could be derived. Okay, first of all, let's figure out what the proportionality constant is. Okay, 1 over Z, the proportionality
constant. Well, there's one thing that we know about probabilities, and that is that they all have to sum up to 1. If the system has probability p_j of being in state j, and I sum over all of the states, it's got to be in one of those states, right? So if I sum over those I'd better get 1, and that means I can pull out the 1 over Z, and that means that Z is equal to this: Z is just the sum over all states of e to the minus 1 over temperature times the energy of the state, Z = sum_j e^(-beta * epsilon_j). And that quantity turns out to be very important; we call it the partition function of the system. We'll see why it's important; right now it just looks like a normalization constant, but you'll see why it's so important in a moment. Notice that it's a function of temperature, because beta is 1 over temperature, and it's a function of volume, because these energies may depend on the volume of the system. But we'll just call it Z for now, and right now it just looks like a proportionality constant. Okay, so again, this is a postulate; Boltzmann and Gibbs and Maxwell tell us it's true, so we're assuming it's true, and now we know the probability of the microscopic system being in any particular state. If that's the case, how do you compute a macroscopic quantity? Well, suppose you want the energy. All you do is take the probability that the system is in state k, multiply it by the energy of state k, and sum over all k; you're taking an expectation value, a probabilistic weighting of all of the states. Right? If the system has probability p_0 of being in state 0, and the energy of state 0 is epsilon_0, then we multiply those together; I just take a probabilistic weighting of all those energies, and that gives me the macroscopic internal energy. We denote that by the expectation value; we use angle brackets for the macroscopic quantity. So if I know the energies of the microscopic states, this gives me a way of figuring out what U is. That's the thermodynamic U, the internal energy of the system, obtained by taking a probabilistic
weighting of the energies by the probability of each state. Now, already we can see an interesting thing that we can do. If that postulate is true, then here's another way you can understand that sum. Notice that if I take this quantity here, the partition function Z, and I take its derivative with respect to beta, that derivative is going to bring down a minus epsilon_k; it's going to give me exactly this sum, except with a minus sign. Can you see that? If I take the derivative of e^(-beta * epsilon_k) with respect to beta, I pull down a minus epsilon_k. So this sum here with the epsilon_k is just the derivative of Z, and when you recognize that, you get the following: we have derived that the internal energy is the negative beta-derivative of the logarithm of the partition function, U = -d(ln Z)/d(beta). And we did that from microscopic considerations; we assumed Boltzmann's prescription for the probabilities to get it, but that's a neat result. Now, from thermodynamics, remember that the internal energy is the Helmholtz free energy plus TS; remember, the Helmholtz free energy was energy minus TS, so U = A + TS. And now I'm going to play the following game. Remember that S is the negative derivative of A with respect to temperature; we derived that a few slides ago: if I take the derivative of the Helmholtz free energy with respect to temperature, I get the negative of the entropy. So I just use that, and I get U = A - T dA/dT. Now, temperature times a derivative with respect to temperature can be rewritten in terms of beta and a derivative with respect to beta, because beta is 1 over T, so I can write it like that. And then I recognize the derivative of a product, the derivative of the first times the second plus the second times the derivative of the first, so U = d(beta*A)/d(beta). And now look what I've done: from thermodynamics I've shown that the internal energy is the beta-derivative of beta*A, and from the microscopic considerations I've shown that U is the beta-derivative of negative log Z.
And that gives us the association that beta*A is just the negative of log Z, or in other words, A = -T ln Z, and that is one of the most important equations in statistical physics. These folks realized this around the 1870s or thereabouts. What that means, and this is kind of amazing, is that if I know all the microscopic states of the system, I can form this function Z, and I'm doing that from purely microscopic considerations. Now I take its logarithm, I multiply it by the negative of the temperature, and what do I get? I get the Helmholtz free energy as a function of temperature and volume, and from that, as I've already told you, you can compute anything you want: you can get the equation of state of the system, you can get any thermodynamic property of the system. Questions about that? That is the big connection between microscopic and macroscopic. You can start with things that go on at the atomic scale, and from them you can derive something like an equation of state. And 100 years of statistical physics has been spent playing with this idea; it's had great successes over the course of the last 150 years. Now, how am I doing on time? It's 3:30; can I continue a bit more?
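The whole chain just described, postulate the probabilities, form Z, get U two ways, then A = -T ln Z, can be verified numerically on a toy system. The microstate energies and temperature below are made-up values, with k_B = 1:

```python
import math

energies = [0.0, 0.7, 1.3, 2.1]   # made-up microstate energies eps_j
T = 1.8
beta = 1.0 / T                     # beta = 1/T, as in the talk

def log_Z(b):
    """ln of the partition function Z(beta) = sum_j e^(-beta * eps_j)."""
    return math.log(sum(math.exp(-b * e) for e in energies))

Z = math.exp(log_Z(beta))

# Boltzmann-Gibbs probabilities p_j = e^(-beta * eps_j) / Z; they sum to 1
p = [math.exp(-beta * e) / Z for e in energies]

# Internal energy as the expectation value U = sum_k p_k * eps_k ...
U_expect = sum(pk * e for pk, e in zip(p, energies))

# ... and as U = -d(ln Z)/d(beta), via a central finite difference
h = 1e-6
U_deriv = -(log_Z(beta + h) - log_Z(beta - h)) / (2 * h)

# Helmholtz free energy from the key identity A = -T ln Z
A = -T * log_Z(beta)

print(sum(p), U_expect, U_deriv, A)
```

The two routes to U agree, which is the identity U = -d(ln Z)/d(beta), and A sits below U because A = U - TS with positive entropy.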
Yeah, let me just give you one example, and if you don't understand this example, don't worry; I'm really out to give you just the gist of this theory, the genealogy, so that you understand the intellectual tradition: how thermodynamics gave rise, via atomic theory, to statistical physics, and how statistical physics can describe everything that thermodynamics can, and even more. All of that is important to understand. So let me give you one example, and the example is very simple: it's the ideal gas. Suppose I have a bunch of atoms in a container and there's no potential energy between them; all of the energy is kinetic, one half m v squared for each particle, as people remember from physics. So I've got a box, I've got N particles in the box, N atoms, and the particles have mass m, little atoms, and the box has volume V, and we're keeping the box at the temperature T. What's the energy inside the box? We're saying there's no potential energy; these things somehow thermalize, but they don't interact with each other at all. And how do you describe the microscopic state of the system? What do you need? All the positions and all the velocity components of all the molecules. Each particle has how many position coordinates in three dimensions? Three. Each particle has an x, y, and z coordinate, and each particle also has three components of velocity. So for every atom in the system, I've got to give you three positions and three components of velocity; that's six numbers per atom I've got to give you in order to describe what's going on. If there are N atoms, that means I've got to give you 6N numbers to describe the state of the system. All right, now, in terms of those, what is the energy? Well, the energy is one half m v squared for each molecule, and sorry, it shouldn't be a j there, that should just be removed; it's one half m v squared for each molecule, and that is one half m times the sum of the squares of the
three components for the first atom, plus the sum of the squares of the three components for the second atom, plus the sum of the squares of the three components for the third atom, and so on. There are going to be 3N variables there, added up and squared, and we're assuming all the atoms have the same mass m. So the energy of the system is m over 2 times the sum of 3N quantities, all of them squared, and those 3N quantities are the velocity components of the atoms. What's the partition function, then? Well, it's e to the minus beta times the energy, summed over all the states of the system. But now they're not discrete states; they're continuous in classical mechanics, so I've got to integrate over all velocities, 3N of them, so there are 3N integrals there, and then I've got to integrate over all the positions, and there are 3N integrals there as well. Well, the position integrals are easy: the integrand doesn't depend on position, so if I integrate over the three coordinates x, y, and z of one atom, I get the volume of the box, and there are going to be N of those, volume to the N. So the position integrals give me volume to the Nth power, the volume raised to the power of the number of atoms in the box. This is a strange function, but let's keep going. How about the velocity integrals? Well, there are 3N of them, because that exponential, e to the minus v1 squared minus v2 squared minus v3 squared and so on, can be split up: the exponential of a sum is the product of the exponentials. So I'm just getting 3N identical integrals, each one of which looks like this. And now you can go running to your table of integrals, and you can figure out that the integral inside there is just the square root of 2 pi divided by the mass of the atom divided by beta, the inverse temperature. The square root of something raised to the 3N power becomes a power of 3N over 2, because the square root is the one-half power, and 1 over beta is just the temperature, so I'm going to put temperature in the
numerator instead of beta in the denominator. And look what I've done: I've calculated the partition function, and it wasn't all that difficult. Now, according to our theorem, the free energy is minus the temperature times the logarithm of this, and this thing is made to take the logarithm of; it's all powers here, and when you take the logarithm of a power, the power just turns into something that multiplies. If I take the logarithm of V to the N, I've got N log V. If I take the logarithm of the other piece, I get 3N over 2 times the logarithm of what's inside, and the logarithm of what's inside includes the logarithm of T plus the logarithm of a bunch of constants. I'm going to differentiate this, so I don't care what the constants are. So now I have the free energy from purely microscopic considerations, and remember that the pressure is the negative of the derivative of the free energy with respect to volume. I take the derivative of that with respect to volume, and I get that minus the pressure is minus NT over V, or pressure times volume is proportional to temperature: pV = NT. And we've just derived the equation of state of the ideal gas from assuming that it was little molecules banging around. Is the absence of potential energy the only ideal part of this statement, or is there anything else? We're also assuming that the atoms have zero volume, that they don't exclude any of the volume available to the system; that's what allows the position integrals to simply give V to the N. Those are pretty much the only two things that go into the ideal gas. Now, there's a way to fix up this argument to include potentials between the atoms, but it's much harder to do, and it's called the virial expansion. I'm sorry?
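The two computational steps in this derivation, the Gaussian velocity integral and the p = -dA/dV step, can both be checked numerically. This is a sketch with made-up values for the mass, temperature, volume, and particle number, in units where k_B = 1:

```python
import math

m, T = 1.0, 1.0                 # particle mass and temperature (k_B = 1)
beta = 1.0 / T

# Step 1: one velocity integral from the partition function, the integral
# of e^(-beta*m*v^2/2) dv over all v, done as a midpoint Riemann sum
L, n = 10.0, 200000             # integration range [-L, L] and slice count
dv = 2 * L / n
numeric = sum(math.exp(-beta * m * (-L + (i + 0.5) * dv) ** 2 / 2) * dv
              for i in range(n))
exact = math.sqrt(2 * math.pi / (m * beta))   # the table-of-integrals answer
print(numeric, exact)

# Step 2: with Z = V^N * (2*pi*T/m)^(3N/2), form A = -T ln Z and
# differentiate: p = -dA/dV should come out to N*T/V, the ideal-gas law
N = 5.0
def A(V):
    return -T * (N * math.log(V) + 1.5 * N * math.log(2 * math.pi * T / m))

V, h = 3.0, 1e-6
pressure = -(A(V + h) - A(V - h)) / (2 * h)
print(pressure, N * T / V)
```

The crude Riemann sum is enough here because the Gaussian decays so fast; the tail beyond |v| = 10 contributes nothing at this precision.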
Yeah, you could put in both attractive and repulsive forces between atoms. Usually what happens between atoms is that they attract one another for a while, but then if they get too close they repel, and you can include that. The study of how to include that in this calculation can be found in statistical physics books under the name of the Mayer cluster expansion. So you can do it, but it's hard to do. This calculation is relatively simple, but it should be striking to you that we started with nothing but the notion of particles banging around, and out came the equation of state of the ideal gas. That's an amazing connection between the microscopic and the macroscopic. Do you have any control at the microscopic level, to specify the motion of the atoms? They can now manipulate individual atoms experimentally; it's very hard to do, I think, but it can be done. They can trap individual atoms and manipulate them with lasers and that kind of thing. But all of this theory assumes that you don't know the detailed microscopic velocities of the particles; all you know is the probability that they'll have a particular set of velocities, and as you noticed, everything in this theory is in terms of probabilities. So this is assuming that you don't know. If you knew all of the microscopic positions and velocities of the particles, in principle you could use the laws of mechanics to follow each one of them individually; notice that this method did not require you to do that. It's statistical. One last thing about Boltzmann-Gibbs statistics, and then I'll say a few words to hint at the rest of it. Is there a way to express entropy in terms of this microscopic theory? There is. Boltzmann and Gibbs have told us that the probability of a microstate is this, and that means that if I take the logarithm of the probability of a microstate, I get ln p_k = -beta*epsilon_k - ln Z. Now, from thermodynamics, which we've now connected to, we know that the
entropy is U minus A, over T, and that's equal to beta*U plus log Z, just from A being minus T times the logarithm of Z. And now the beta*epsilon_k sitting inside that sum over p_k can be written, thanks to this, as minus ln p_k minus ln Z. And remember that the sum of the probabilities is one, so the first term gives a minus log Z that cancels the log Z on the other side, and you're left with this: the entropy of the system is just the negative of the sum of the probability times the logarithm of the probability, S = -sum_k p_k ln p_k. How many of you have seen this before, in a different context? You get p log p summed over all the states of the system; does it remind you of anything? Coding theory, maybe? Shannon entropy. Good, very good. It should strike you as kind of amazing: the notion of Shannon entropy, which came entirely from information theory, pops up in the entropy of a thermodynamic system. It's the same p log p, and all of the properties of Shannon entropy that you learn, in CIS for example, are true for this entropy. It's amazing that this idea that Clausius started in 1850, of finding this new thermodynamic variable, combined with atomic theory, leads to something like the Shannon entropy for atoms. That should strike you as kind of amazing, and again, I think this is one of the great successes of statistical physics. Now, there's one thing that should bother you a little bit, and that is: where on earth did e to the minus beta times the energy come from? We just assumed it. At the very beginning I told you Boltzmann and Gibbs proclaimed e to the minus beta times the energy; it led us to some nice results, but where did it come from? One way that you can understand where it came from is to say that that's not the fundamental thing. The fundamental axiom we should have taken is not e to the minus beta times the energy; the fundamental axiom we should have taken is that the entropy is this. By the way, this formula for the entropy, or something like it, is carved on Boltzmann's tombstone; you can go to Vienna, Austria, and on his grave is this formula,
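The identification just derived can be confirmed numerically: for any toy set of microstate energies, the Shannon-style sum -sum p_k ln p_k agrees with the thermodynamic entropy (U - A)/T built from the partition function. The energies and temperature below are made-up values, with k_B = 1:

```python
import math

energies = [0.0, 0.5, 1.0, 2.0]   # made-up microstate energies
T = 1.2
beta = 1.0 / T

Z = sum(math.exp(-beta * e) for e in energies)
p = [math.exp(-beta * e) / Z for e in energies]

# The Gibbs/Shannon form of the entropy: S = -sum_k p_k ln p_k
S_shannon = -sum(pk * math.log(pk) for pk in p)

# The thermodynamic route: U = <E>, A = -T ln Z, and then S = (U - A)/T
U = sum(pk * e for pk, e in zip(p, energies))
A = -T * math.log(Z)
S_thermo = (U - A) / T

print(S_shannon, S_thermo)   # identical up to rounding
```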
Something like that, not exactly. I haven't made that pilgrimage yet, but one of these days, if I'm ever in Vienna, I want to see it. So you could have started with this. You could have said: this is our fundamental postulate, the entropy is equal to this, and once you know this, if I maximize the entropy at constant energy, I can derive the e to the minus beta times the energy. In other words, the Boltzmann-Gibbs probability distribution is derived by maximizing this entropy at constant energy. That's another way you can axiomatize the theory. But now you might say, well, where does p log p come from? What is so fundamental about p log p, and why should I believe that the entropy is p log p? Well, there's an important quality of entropy that you want to preserve, and p log p preserves it, and it is the following. Suppose that I have two systems, two containers of stuff, A and B, and they have nothing whatsoever to do with one another; maybe this is one container of gas and that's another container of gas. The probabilities of the states of the first we'll call p^A_i, and the probabilities of the second we'll call p^B_j; by the way, that's not an exponent, it's just a label. Now, the number of microstates might even be different in the two cases, but it doesn't matter; each system can be in a bunch of states. All right, so I've got these two systems. Suppose I say that I don't want to consider them separately; I want to consider them together, as one single thermodynamic system. Well, if this one had N_A microstates and this one had N_B microstates, then the total has N_A times N_B microstates, combinatorially, because this one could be in any one of N_A states and this one could be in any one of N_B states. All right, so there's a total of N_A times N_B states, and the probability of the whole system such that the A part of it is in state i and the B part is in state j, these being independent systems, is the joint probability, the product of those two. Now let's see what that gives us about the entropy. The joint entropy, the entropy
of the whole system, is the sum, over all the states of both systems together, of that p log p for the whole thing. But if p^AB is the product of p^A and p^B, then the log of the product is the sum of the logs, and remembering that the probabilities of each system individually have to sum to one, if I sum p^A over i I get one, and if I sum p^B over j I get one, the rest of this goes away, and I get this term and this term separately. And that means that the entropy of the whole system is just the sum of the entropies of the two systems. The entropy of the whole is the sum of the parts, and that means that the entropy is an extensive quantity: if I take a system with entropy S_A and a system with entropy S_B and I stick them together, the total entropy is S_A plus S_B. That's kind of a good thing. What is Shannon entropy? Shannon entropy is an entropy from information theory; let me just finish this quickly and then describe what Shannon entropy is. The idea that the entropy is extensive is important, and the only reason this little argument worked is that the log of a product is the sum of the logs; that's what makes the entropy extensive, and that's why the expression for entropy involves a logarithm. So now you could take as your fundamental postulate that the entropy is extensive, that the entropy of the whole is the sum of the entropies of the parts. Take that as your postulate; from it you can derive the Boltzmann-Gibbs form for the entropy, this thing; from that you can maximize at constant energy and get the Boltzmann-Gibbs probability distribution, and everything else follows from there. So now, shortcomings of this theory. It works brilliantly for computing thermodynamic properties of solids, liquids and gases. We did it for the ideal gas, but it was used by Feynman, for example, to compute the specific heat of liquid helium, which is arguably one of the greatest achievements of 20th-century physics, one of the best
things that you could compute using a pencil and paper in the 20th century. It's a fabulous paper, and there's nothing in it beyond what I've shown you so far. The trouble is, when it works, it works brilliantly, but when it fails, it fails miserably, and there are systems for which we know this whole program doesn't work: systems that have long-range interactions. This doesn't work well for gravity holding together a cluster of stars; it doesn't work at all there. A plasma, where you have electrons with long-range Coulomb interactions between them: this doesn't work well at all. People can't use Boltzmann-Gibbs statistical physics to compute the statistical properties of these things. Long-range interactions, long-time memory, long-range correlations, power-law tails: all of these things seem to cause Boltzmann-Gibbs theory to fail, and nobody knows why. For many, many things it works wonderfully, but there is a whole class of problems for which it doesn't work. So what did Tsallis do? I'm going to let him tell you most of what he did, but here's the idea. You all know the story of geometry in the 19th century. Everybody up through the 20th century was forced to learn Euclid's postulates of geometry; everybody had to read Euclid's Elements and do geometric proofs. People believed that if anything was absolutely fixed, it was geometry. It was obvious that straight lines were straight, it was obvious that parallel lines never intersected; all of these things were considered obvious. And then in the 19th century it was shown that they weren't so obvious after all: there were perfectly consistent geometries in which some of these things weren't true. There were perfectly consistent geometries you could construct in which there were no parallel lines, no lines parallel to a given one. There were perfectly consistent geometries in which there were an infinite number of lines parallel to a given one. There were perfectly consistent geometries you could derive in which
the sum of the angles of a triangle did not add up to 180 degrees. It was possible to have not just one geometry but many, many different geometries, and in order to figure out what the other ones were, all you had to do was negate one postulate, Euclid's parallel postulate. If that wasn't true, then many other possible geometries opened up. The parallel postulate looks a little bit out of place in Euclid's Elements: it says that if I have a line and a point external to it, there's only one line through that point parallel to the given line. All the other postulates are much simpler than that; I'm assuming you all studied high-school geometry. And so, by denying one postulate of geometry, people like Lobachevsky and Gauss and others opened up the possibility that there were many, many different geometries. Then came Einstein and Hilbert at the beginning of the 20th century, who showed that, you know what, some of those other geometries are actually useful. Now, Tsallis tries to do the same thing in statistical physics. What's the postulate he negates? It's the extensivity of the entropy. The postulate he negates is that the entropy of the whole is the sum of the entropies of the parts. If that's not true, there's nothing forcing the expression for the entropy to have the p log p form; you're going to get something different from the Boltzmann-Gibbs probability distribution, and if the Boltzmann-Gibbs probability distribution isn't true, you're going to get different results for everything. The way that he did it is so mathematically elegant that it's worth considering for just one minute. Instead of the exponential function, he defines the q-exponential, called e_q of x, and it's defined in this way. Now, if I take the limit of this expression as q goes to one, and you dust off your calculus books to remember how to take limits, you'll find that in the limit as q goes to one, the q-exponential of x is just e to the x.
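The q-exponential just described, e_q(x) = [1 + (1-q)x]^(1/(1-q)), together with its companion q-logarithm ln_q(x) = (x^(1-q) - 1)/(1-q), which the talk defines next, can be played with directly. A sketch (the test values are arbitrary) of the inverse relationship and the q -> 1 limits:

```python
import math

def exp_q(x, q):
    """Tsallis q-exponential: [1 + (1-q)*x]^(1/(1-q)); -> e^x as q -> 1."""
    if q == 1:
        return math.exp(x)
    return (1 + (1 - q) * x) ** (1 / (1 - q))

def ln_q(x, q):
    """Tsallis q-logarithm: (x^(1-q) - 1)/(1 - q); -> ln x as q -> 1."""
    if q == 1:
        return math.log(x)
    return (x ** (1 - q) - 1) / (1 - q)

x, q = 2.5, 1.3

# Inverse functions of one another, just like the ordinary exp and ln
print(exp_q(ln_q(x, q), q))

# One-parameter deformations: as q -> 1 they approach the ordinary functions
print(exp_q(1.0, 1.0001), math.e)
print(ln_q(5.0, 0.9999), math.log(5.0))
```

Taking q very close to 1 numerically plays the role of the L'Hopital limit mentioned in the talk.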
We're also going to define the q-logarithm, ln_q(u) = (u^{1-q} - 1)/(1-q), and once again from the calculus books you find that the limit as q goes to 1 of this quantity is just the ordinary logarithm. Moreover, these two quantities, like the real exponential and the real logarithm, are inverse functions of one another: the q-exponential of the q-logarithm gives you the same number back, and the q-logarithm of the q-exponential gives you the same number back. They're inverse functions. Now, how do you do these limits? Once again, get out your calculus books and remember L'Hôpital's rule, because you can see this is an indeterminate form: as q goes to 1, the numerator is u^0 minus 1, which is 1 minus 1, which goes to 0, and the denominator is also going to 0. So this is a 0-over-0 limit, and you can work it out and find that it comes to log u. The q-exponential and the q-logarithm are one-parameter deformations of the real exponential and the real logarithm. The Tsallis entropy is then defined to be the sum of p log(1/p), except with the q-logarithm now: S_q = Σ_i p_i ln_q(1/p_i). If that were the real logarithm, the log of 1/p would be minus log p, and you'd be right back to the Boltzmann theory. So when q goes to 1, you're back to the Boltzmann theory. This is the Tsallis entropy; this is what he wants to use. He makes one other modification. He says: when you compute the internal energy, instead of weighting the microscopic energies by the probabilities, weight them by the probabilities to the q power, and when you do that you'd better normalize by the sum of those powers. He calls this the q-expectation value of the energy. Use that expectation value to get the internal energy, and use the Tsallis entropy instead of the Boltzmann-Gibbs entropy. The whole theory reduces to Boltzmann-Gibbs when q is equal to 1, so Boltzmann-Gibbs is a special case of the Tsallis theory; it's one case. But what Tsallis claims is that sometimes this theory works better when q isn't 1. By the way, from that
entropy, by maximizing it while holding the q-expectation of the energy constant, you can figure out the microscopic probabilities. They are q-exponentials instead of exponentials of minus beta times the energy, and the normalization constant is one over the q-partition function, which is defined the same way but with q-exponentials. The amazing thing is that thermodynamic relations, like the one we derived earlier, still work; they don't care. You can derive all of thermodynamics upward from here and it all still works. The only thing that doesn't hold is additivity of the entropy: the q-entropy of the whole is the sum of the q-entropies of the parts plus (1-q) times their product. And again, notice that when q is 1 it's additive, it's extensive. When q is not 1 the entropy still combines in a definite way, but depending on whether q is bigger than 1 or less than 1, it can be subadditive or superadditive. What does he apply this to? He's applied it to fluid turbulence, which is not well described by statistical physics. He's applied it to gravity, to globular clusters of stars. He's applied it to particle physics. He's even applied it to the motion of little Hydra, one-celled creatures swimming around in water. He's applied it to things that you wouldn't usually apply statistical physics to: the frequency of words in linguistics, pure-electron plasmas, low-dimensional dynamical systems of such low dimension that you might not think of applying statistical physics to them at all, but there are problems there too. What he's done is to define a whole family of theories of statistical physics, not just one. Is there any a priori way to know what q is?
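To make the pieces of the last few minutes concrete, here is a short numerical sketch of my own (plain Python, with made-up probability and energy values): the q-logarithm, the Tsallis entropy S_q = Σ p_i ln_q(1/p_i), the q-expectation value of the energy, and the pseudo-additivity rule S_q(A+B) = S_q(A) + S_q(B) + (1-q) S_q(A) S_q(B) for two independent subsystems.

```python
import math

def q_log(u, q):
    """q-logarithm: ln_q(u) = (u**(1-q) - 1)/(1-q); reduces to ln(u) as q -> 1."""
    if abs(q - 1.0) < 1e-12:
        return math.log(u)
    return (u ** (1.0 - q) - 1.0) / (1.0 - q)

def tsallis_entropy(p, q):
    """S_q = sum_i p_i * ln_q(1/p_i), equivalently (1 - sum_i p_i**q)/(q-1); k_B = 1."""
    return sum(pi * q_log(1.0 / pi, q) for pi in p if pi > 0.0)

def q_expectation(p, E, q):
    """q-expectation value: <E>_q = sum_i p_i**q * E_i / sum_i p_i**q."""
    w = [pi ** q for pi in p]
    return sum(wi * Ei for wi, Ei in zip(w, E)) / sum(w)

q = 1.5
p = [0.5, 0.3, 0.2]           # made-up probability distribution
E = [0.0, 1.0, 2.0]           # made-up energy levels

# q -> 1 recovers the Boltzmann-Gibbs entropy -sum p ln p:
print(tsallis_entropy(p, 1.0))

# Pseudo-additivity for two independent subsystems A and B:
A, B = [0.6, 0.4], [0.7, 0.2, 0.1]
AB = [a * b for a in A for b in B]   # joint distribution factorizes
lhs = tsallis_entropy(AB, q)
rhs = (tsallis_entropy(A, q) + tsallis_entropy(B, q)
       + (1 - q) * tsallis_entropy(A, q) * tsallis_entropy(B, q))
print(lhs, rhs)   # equal: the q-entropies combine with the (1-q) cross term
```

Note the cross term: for q > 1 it is negative, making the joint entropy subadditive, and for q < 1 it is positive, making it superadditive, exactly as described above.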
For many systems we know q equals 1 works; the Boltzmann-Gibbs theory is q equals 1. If you give me a system, is there any way that I can know what q is before I fit the data? So far, no. Can q be related to other properties of the system? And why does this particular deformation of the exponentials and logarithms work? Why this particular deformation of the entropy with q; why does this work while others don't? Nobody knows the answers to these questions. But the comparison with experiment is striking: it often works. So again, just as there are many different kinds of geometries, what Tsallis claims is that there are many different kinds of thermostatistics, and some of these seem to be able to describe certain phenomena better than Boltzmann-Gibbs. It could be the start of a revolution in statistical physics, but there are all kinds of questions that remain to be answered. I hope that was helpful in giving you some of the background, including the mathematical background of what he's doing, so that when he talks tomorrow, or I guess the next day, whichever it is, you'll go into it really understanding what he's doing. He's trying to change all of statistical physics in such a way that it accommodates complex systems with long tails, long-range correlations, long-range forces; that's what he's out to do. There are 150 years of really good results based on Boltzmann-Gibbs, but we know it doesn't work for everything, and it's a big challenge to figure out what we can use next. I went on a longer time than I intended, so thanks for your attention; I hope that was helpful to your understanding. He's a very, very famous guy, very well known throughout the world. There is some controversy associated with him and his theories; not everybody believes in this deformation of statistical physics. But yes, for certain models these q values are theoretical, and I can explain where that comes from, but it requires a longer explanation. With globular clusters of stars and
the radial distribution of the orbiting stars in a globular cluster, q equals four sevenths or so, four sevenths and three sevenths, for certain theoretical reasons; that's why it even works. That's one of the few cases where you can start with the microscopic dynamics and figure out what q is. Most of it is empirical: you take experimental results and you try to fit q. I have a feeling that within 48 hours, maybe more, we can continue this, maybe offline or informally. Thanks.