We've looked at adiabatic expansion and compression. These isentropic processes macroscopically reverse each other without requiring microscopic reversal: the gas begins in, and returns to, the macroscopic state of N atoms at temperature T in a volume V, but the individual atoms do not return to their original positions and velocities.

Now let's consider another adiabatic process in which the gas increases its volume. Instead of moving the center wall, we can simply remove it, allowing the gas to expand freely into the greater volume. As it does so, it never pushes on a moving surface, so it has no opportunity to do work, and its internal energy remains constant. So in the free expansion of an ideal gas, the gas does no work, and its internal energy, and hence its temperature, do not change, unlike the case of ordinary adiabatic expansion. Since the temperature remains constant, the ratio of temperatures in our entropy expression can be taken to be one, and we are left with entropy as a function of volume alone. In the free expansion, the volume increased, so the entropy increased, indicating an irreversible process. In particular, the gas will not spontaneously reduce its volume, thereby lowering its entropy. Of course, we could mechanically compress the gas and then remove heat equal to the work done. This would return the gas to its original state, but it would change the state of the rest of the universe. To be reversible, remember, we have to be able to return the entire universe to its original state.

To gain more insight into our simplified entropy expression, let's drop the S0 term, at least for now. Note that V over V0, a ratio of volumes, is a dimensionless quantity, a pure number. Let's denote this as M and think of it as a number of elementary cells. That is, we think of our entire volume V as divided into M cells, each with volume V0. We can then specify an atom's position to within the dimensions of an elementary cell by stating which cell it is located in. The smaller V0, the more accurate the specification of the atom's position. With this definition, we can write the entropy as S equals K times N times the natural log of M. Using the power property of logarithms, this becomes K times the natural log of M to the Nth power. Let's denote M to the Nth power by capital gamma. The significance of this quantity is that it is the number of ways to distribute N atoms among the M cells. For N equal to 1, there are M ways to distribute the single atom: it can go in any of the M cells. For N equal to 2, there are M squared arrangements: for each of the M cells the first atom can be in, the second atom can be in any of the M cells, and M times M is M squared. Extending this idea to N atoms, the number of arrangements is M to the Nth power.

So, at least in this special case, entropy S equals Boltzmann's constant K times the natural log of gamma. This leads us to wonder whether, in general, the entropy of a gas is Boltzmann's constant times the logarithm of the number of ways the molecules can be arranged within the volume. This is known as Boltzmann's principle. It was one of the great insights of Ludwig Boltzmann, so much so that, as shown here, it's etched on his grave marker, just above his rather imposing bust. Boltzmann used W for the number of arrangements; we've been using W for work, so we use the alternate symbol gamma. This gives us a new way to think about the entropy change in free expansion, and about why a process like this doesn't spontaneously reverse, even though that is possible in theory.
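For reference, here is that chain of reasoning written symbolically, a direct transcription of the narration, with the S0 term dropped and K denoting Boltzmann's constant:

```latex
% Entropy of N atoms in volume V, divided into M = V/V_0 elementary cells
% (additive constant S_0 dropped):
\begin{align*}
  M &= \frac{V}{V_0}
      && \text{(number of elementary cells, a pure number)} \\
  S &= K N \ln M = K \ln M^{N}
      && \text{(power property of logarithms)} \\
  \Gamma &= M^{N}
      && \text{(ways to distribute $N$ atoms among $M$ cells)} \\
  S &= K \ln \Gamma
      && \text{(Boltzmann's principle; his $W$ is our $\Gamma$)}
\end{align*}
```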
Let's set M to the Nth power equal to gamma 1. This is the number of ways to arrange the N atoms among the M cells in our box. Then we allow the gas to expand into twice the volume. With twice the volume, there will be twice the number of cells, 2M. So the number of ways to arrange the N atoms in this doubled volume is gamma 2 equals the quantity 2M to the Nth power, which is 2 to the N times gamma 1.

Now we come to the new way of thinking. In video 2 of this series, we saw that a dynamic system of elastic balls is chaotic: any uncertainty in its initial state grows exponentially with the number of collisions, so it is practically impossible to predict the microscopic physics of such a system. Let's turn that problem to our advantage by treating the system as so unpredictable as to effectively be a random process. That is, we think of the dynamics as essentially randomly shuffling the atoms' positions after each short interval of time. If we assume each of the possible gamma 2 atomic arrangements is equally likely, then each has a probability of occurrence equal to 1 over gamma 2.

If at any time we look at the atomic arrangement in the large volume, what is the probability that all atoms, or more generally molecules, will be in the left half of the box? This is simply the fraction of the gamma 2 arrangements that have all atoms in the left half. But the number of such arrangements is just gamma 1, the number of arrangements in the original smaller box. So the probability is gamma 1 over gamma 2, which equals one half to the Nth power. For large N, this is extremely small. So the question, why can't a gas sample spontaneously compress into half its original volume, has the answer: it can, but it is extremely unlikely. So unlikely as to be effectively impossible.

Therefore, it is possible that if we keep watching our system, we will eventually see all the particles end up on the left side of the box. The gas will have spontaneously compressed. But with 64 particles, the probability of this occurring at a given time is 1 over 2 to the 64th power. This is about 1 over 18 times 10 to the 18th power: 1 over 18 million million million. The time since the Big Bang, the age of the universe, is about 0.44 times 10 to the 18th power seconds. So even if we had been looking at our system once a second since the beginning of the universe, it is unlikely we would ever have seen this occur; the short numerical sketch at the end of this passage checks the arithmetic. Now imagine the time scale for spontaneous compression in the case of a gas with trillions of atoms. From this point of view, the second law of thermodynamics does not tell us what is impossible, only what is very, very improbable.

The physics of free expansion brings up an important point. We start at state 1 with temperature T, pressure P1, and volume V1. We end up at state 2 with the same temperature, lower pressure P2, and greater volume V2. In isothermal expansion, we move between these states continuously along the isothermal curve, in a manner such that at any time the system is effectively in equilibrium. By contrast, in free expansion, we start at state 1. Then, with the removal of the center wall, the system enters a non-equilibrium condition in which pressure and volume do not have well-defined values. So the system does not correspond to any point on our PV diagram. A rigorous description of the gas's evolution would require the techniques of non-equilibrium thermodynamics.
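Here is a minimal Python sketch that puts the improbability argument's numbers together, using only the figures quoted above: 64 particles, one glance per second, and an age of the universe of about 0.44 times 10 to the 18th power seconds.

```python
N = 64                       # particles in the toy example
p_left = 0.5 ** N            # gamma_1 / gamma_2 = (1/2)^N: probability that one
                             # glance finds every atom in the left half

age_universe_s = 0.44e18     # seconds since the Big Bang, as quoted above
glances = age_universe_s     # looking once per second since the beginning

print(f"P(all {N} atoms on the left) = {p_left:.2e}")  # ~5.42e-20, i.e. 1/2**64
print(f"Expected sightings: {p_left * glances:.3f}")   # ~0.024 -- probably never
```

Even at a mere 1,000 particles the probability drops below 1 in 10 to the 300th power, which gives a sense of why, for a gas with trillions of atoms, spontaneous compression is never observed.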
Eventually, after re-establishing equilibrium in the larger volume, the gas state can once again be represented by a point on our PV diagram, with temperature T, pressure P2, and volume V2. Now let's compare what happens in moving between these two states by these two processes, isothermal expansion and free expansion. We start with volume V1 and end with volume V2, so the volume change in both cases is simply V2 minus V1. We only need to know the initial and final states to determine this; we don't need to know anything about the process that moved the system between the states.

What about added heat? In isothermal expansion, as we've seen previously, the gas absorbs heat NKT times the natural log of X2 over X1, where the X's are the piston positions, and X2 over X1 equals V2 over V1. But in free expansion, no heat is absorbed by the gas. There is no way to determine the added heat by simply knowing the initial and final states; we need to know the process that moved the system between the states. This is why we call quantities like volume, pressure, temperature, and entropy state variables or, alternately, state functions. Quantities like heat and work are called process functions.

For a thermodynamic cycle like the Carnot cycle, if we sum all the changes in a state variable over the cycle, we get zero. This is obvious, since starting at, say, state 1, going through the complete cycle returns us to state 1. Since the initial and final states are the same, all state variables return to the same values, and the total change is zero. The elongated S symbol with a superimposed circle represents this summation over a cycle. But the total change over the cycle of a process function, such as added heat, is not necessarily zero. For the Carnot cycle, it's NK times the natural log of X2 over X1, times T hot minus T cold, which is also the net work performed by the cycle. It's common in thermodynamics to use notations such as dT or dS for an infinitesimal change in a state variable. These are said to be exact differentials, which is another way of saying that summing them over a closed cycle is guaranteed to give zero. The notation delta Q represents an infinitesimal amount of a process quantity, here added heat; these are not guaranteed to sum to zero over a closed cycle.

A variation on this applies when we consider changes over different paths between distinct initial and final states. For example, suppose we go from state 1 to state 3 of the Carnot cycle. Path A takes us along the forward cycle through state 2. Path B takes us along the reversed cycle through state 4. The sum of changes in a state variable has to be the same for both paths, because the total change depends only on the initial and final states, which are the same for both paths. This is not true for a process function.

In the next video, we will use the methods of statistical mechanics to flesh out the idea that entropy is a measure of the number of ways to arrange the molecules of a substance, and that the chaotic micro-dynamics of a gas effectively reshuffles that arrangement continuously. One of our results will be an expression for the absolute entropy of an ideal monatomic gas. Curiously, although the analysis is completely in the realm of classical physics, we will see that quantum mechanics sneaks into the final result.
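As a closing recap, the state-function versus process-function distinction from this section can be written symbolically, using the narration's expression for the Carnot cycle's net heat:

```latex
% Over a complete cycle, an exact differential (state variable) sums to zero,
% while a process function such as added heat need not:
\begin{align*}
  \oint dS &= 0
      && \text{(state variable: start and end states coincide)} \\
  \oint \delta Q &= N K \,\ln\!\frac{x_2}{x_1}\,
                    \left(T_{\mathrm{hot}} - T_{\mathrm{cold}}\right) \neq 0
      && \text{(Carnot cycle: equals the net work done)}
\end{align*}
```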