Welcome to Thermodynamics 6. In this video we explore the relation between heat capacity and the third law of thermodynamics. As we will see, the third law is intimately connected with quantum theory. Indeed, it was a primary motivation for the development of a quantum theory of matter. Let's think about what happens to the entropy of a system as its temperature approaches absolute zero. For an ideal monatomic gas, the Sackur-Tetrode equation gives the entropy as a function of temperature and volume. In the limit T goes to zero, the argument of the logarithm goes to zero, and the expression goes to minus infinity, a very strange result. We can argue that this result is not valid because gases cannot exist at very low temperatures, so the expression is not applicable in this limit. As we saw in the previous video, the Sackur-Tetrode equation accurately predicts the entropy of a monatomic gas. However, in general, when cooled, real gases will condense to a liquid and then freeze to a solid. The equation is not applicable to these states. Let's consider the problem from a more abstract thermodynamic point of view. Suppose we have a system at temperature T. We extract heat delta Q. This changes the system's entropy by dS equals minus delta Q over T. As heat is removed and the system temperature approaches absolute zero, it seems like the magnitude of these entropy increments should grow without bound, since the denominator approaches zero. So the entropy would tend toward minus infinity. Entropy equals Boltzmann's constant times the logarithm of gamma, the number of ways to arrange the components of the system. If S approaches minus infinity, then gamma would have to approach zero. But that makes no sense. The minimum possible value of gamma should be one. The system is in some arrangement, so there must always be at least one way to arrange the components of a system. Let's look at this from a slightly different point of view.
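The low-temperature divergence of the Sackur-Tetrode entropy is easy to see numerically. Here is a minimal sketch; the parameters (one mole of a helium-like gas in 22.4 liters) are my own illustrative choices, not values from the video:

```python
import math

# Sackur-Tetrode entropy for an ideal monatomic gas (standard form):
#   S = n k [ ln( (V/n) * (4 pi m U / (3 n h^2))^(3/2) ) + 5/2 ],  U = (3/2) n k T
k = 1.380649e-23    # Boltzmann constant, J/K
h = 6.62607015e-34  # Planck constant, J*s

def sackur_tetrode(T, V=0.0224, n=6.022e23, m=6.6e-27):
    # Illustrative defaults: one mole of a helium-like gas in 22.4 liters
    U = 1.5 * n * k * T
    arg = (V / n) * (4 * math.pi * m * U / (3 * n * h**2)) ** 1.5
    return n * k * (math.log(arg) + 2.5)

print(sackur_tetrode(300.0))  # positive: a sensible entropy near room temperature
print(sackur_tetrode(1e-6))   # large and negative: the unphysical low-T divergence
```

Since U is proportional to T, the argument of the logarithm vanishes as T goes to zero and the entropy runs off to minus infinity, exactly as described above.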
Recall the definition of heat capacity C. To change an object's temperature by dT, we have to add heat delta Q equal to C dT. In the present case, we are extracting heat, which is adding negative heat, so we pick up an additional minus sign that cancels the one in our entropy expression. Then dS equals C over T dT. We see that the entropy increments will not blow up if the ratio C over T remains finite as T goes to zero. That is, if the heat capacity goes to zero at least as fast as the temperature does. Let's now give one statement of the third law of thermodynamics: the entropy of a system approaches a constant value, which is zero in certain cases, as its temperature approaches absolute zero. From our present discussion, we see that a corollary of this is that the heat capacity of any system must approach zero as the temperature goes to zero. There is an intimate connection between heat capacity and the third law. Now, let's consider the heat capacity of solids, since that is the state we generally deal with at very low temperatures. In 1819, Dulong and Petit noticed that the experimental heat capacity of many solids is near 25 joules per mole kelvin; this became known as the Dulong-Petit law. For example, in the table at left are values for several metals at room temperature, ranging from 24.3 for aluminum to 26.4 for lead. Recall that one mole is Avogadro's number of molecules, about 6 times 10 to the 23rd. The plot at right shows molar heat capacity versus atomic number Z for most of the elements that are solids at room temperature. The great majority of these values were unknown at the time of Dulong and Petit. Still, we see that, roughly speaking, the values do tend to cluster about the Dulong-Petit value, indicated by the horizontal red line. Previously, we found that our model of an ideal monatomic gas as n tiny colliding billiard balls predicts a heat capacity, specifically heat capacity at constant volume, of three halves nk, with k Boltzmann's constant.
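The claim that the entropy removed stays finite only when C falls at least as fast as T can be checked by integrating C over T numerically. A minimal sketch, with a made-up constant heat capacity and a made-up cubic one (both purely illustrative choices):

```python
# dS = -(C(T)/T) dT when heat is extracted; the total entropy removed in
# cooling from T_hi to T_lo is the integral of C(T)/T.  Midpoint-rule sketch:
def entropy_removed(C, T_hi, T_lo, steps=200000):
    dT = (T_hi - T_lo) / steps
    return sum(C(T_lo + (i + 0.5) * dT) / (T_lo + (i + 0.5) * dT) * dT
               for i in range(steps))

const_C = lambda T: 25.0                     # temperature-independent C
cubic_C = lambda T: 25.0 * (T / 300.0) ** 3  # C -> 0 faster than T does

for T_lo in (1.0, 0.1, 0.01):
    print(T_lo, entropy_removed(const_C, 300.0, T_lo),
                entropy_removed(cubic_C, 300.0, T_lo))
# Constant C: each factor-of-10 drop in T_lo removes another 25*ln(10),
# about 57.6 J/K -- the total diverges logarithmically as T_lo -> 0.
# Cubic C: the total converges to 25/3, about 8.33 J/K.
```

With constant C the entropy removed grows without bound as the final temperature shrinks, while the vanishing heat capacity keeps it finite, which is exactly the corollary stated above.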
If n is Avogadro's number, then nk is the gas constant R, 8.314 joules per mole kelvin, and the molar heat capacity is 12.47 joules per mole kelvin. The values for helium, neon, and argon gas agree with this prediction to four digits. The Dulong-Petit law predicts that many solids have a molar heat capacity about twice this: 3R equals 24.94 joules per mole kelvin. This leads us to ask: why do some solids have twice the molar heat capacity of some gases? In place of our simple model of a monatomic gas as a collection of freely moving tiny billiard balls, we might model a solid as a collection of tiny billiard balls held together by little springs representing chemical bonds. The balls can move, but the springs will tend to return them to their equilibrium positions in the crystal lattice. In addition to the kinetic energy due to the motion of the balls, the solid will also contain potential energy due to the compression and stretching of the springs. In the third video of the Mechanics series, we saw that when a mass on a spring oscillates, the kinetic and potential energies of the system fluctuate sinusoidally, such that the average kinetic and potential energies are equal. So, an explanation for why some solids have twice the molar heat capacity of some gases is that in a solid, atoms store the same kinetic energy as they would in the gas state at the same temperature, but in addition the chemical bonds store an equal amount of potential energy. However, this higher heat capacity for a solid, like that of a monatomic gas, is independent of temperature. We still have the issue that the heat capacity should go to zero as the temperature does, so that the entropy at zero temperature remains finite. During the 19th century, the development of cryogenic technology enabled the measurement of heat capacity at very low temperatures. Researchers found that, as shown here for platinum, heat capacity does indeed approach zero as temperature does. Good news for the third law.
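As a quick arithmetic check of the two predictions just compared:

```python
R = 8.314  # molar gas constant, J/(mol*K)

# Ideal monatomic gas: kinetic energy only -> C_V = (3/2) R
print(round(1.5 * R, 2))  # 12.47 J/(mol*K)

# Dulong-Petit solid: equal kinetic and potential energy -> C = 3 R
print(round(3.0 * R, 2))  # 24.94 J/(mol*K)
```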
And, notice that as temperature increases, the value approaches the Dulong-Petit value of 25 joules per mole kelvin. But how do we explain this phenomenon? In 1907, the Einstein model provided an answer. Albert Einstein is, of course, famous for his theory of relativity, but his reputation as one of the greatest, arguably the greatest, physicists is due as much to his contributions to many other fields of physics, including thermodynamics. Starting with our balls-connected-by-springs model of a solid, we treat each atom as a three-dimensional harmonic oscillator, able to oscillate independently in any of the three dimensions of space. In general, this is a difficult problem to analyze, because the force on each atom depends on the positions of its neighbors. So, the oscillations are coupled. Einstein made the simplifying assumption that these forces are due to the average or mean field of the other atoms. In this case, the atoms can be treated as moving independently. This is equivalent to having the springs connected to each atom in turn connected to fixed supports instead of to other atoms. This assumption avoids the problem of coupled oscillations. Now the oscillations of each atom in each dimension are independent and of the same frequency, and the thermal behavior of the entire solid is equivalent to 3n independent one-dimensional oscillators. Here's an atom oscillating in the single x dimension. Classically, this would be a sinusoidal function of time with some amplitude A and frequency nu determined by the atom's mass and the stiffness of the springs, the chemical bonds. Einstein's brilliant idea was to apply the, at that time, new quantum theory to this oscillator. Previously, quantum theory had only been applied to electromagnetic oscillations. In this theory, the energy of an electromagnetic field oscillating at frequency nu can only have the discrete values n h nu, where n is a non-negative integer and h is Planck's constant.
Technically, there can also be an additive constant to the energy, but that won't concern us here. Einstein guessed that quantum theory might be a general principle of nature applying to any oscillations, even the mechanical oscillations of atoms in a solid. The constant h nu, a quantum of energy, can also be written as k TE, where k is Boltzmann's constant and TE is called the Einstein temperature: TE equals h nu over k. So the oscillator energy is assumed limited to the discrete values n k TE. In the previous video on statistical mechanics, we developed the Boltzmann distribution. This gives the occupation frequency f n of the oscillator state with energy n k TE as f n equals 1 over z times e to the minus n k TE over k T. Canceling common k's, we are left with 1 over z times e to the minus n TE over T. The occupation frequency is the fraction of time a system will spend in a state with the given energy when the temperature is T. This can also be interpreted as the probability that the system will be in that state. For convenience, let's set a equal to e to the minus TE over T. This is less than 1. Then f n equals 1 over z times a to the n. The occupation frequencies have to sum to 1 because the probability that the oscillator has some possible energy is 100%. This requires the normalizing factor z, the so-called partition function, to equal the sum from n equals 0 to infinity of a to the n. The sum evaluates to 1 over 1 minus a. Now we can calculate the average energy of the one-dimensional quantum oscillator. This is the sum from n equals 0 to infinity, that is, the sum over all possible energy states, of f n, the probability of that state, times E n, the energy of that state. This is k TE over z times the sum of n a to the n. This sum evaluates to a over quantity 1 minus a squared. Substituting for a and simplifying, we get k TE over e to the TE over T minus 1.
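The geometric-series results above are easy to verify numerically, by summing the Boltzmann distribution directly and comparing against the closed form k TE over e to the TE over T minus 1. A sketch in units where k equals 1, with an arbitrary illustrative Einstein temperature of 100:

```python
import math

# Average energy of one quantized oscillator with levels E_n = n * TE
# (units with k = 1), computed two ways: a direct Boltzmann-weighted sum,
# and the closed form TE / (exp(TE/T) - 1).
def avg_energy_sum(T, TE=100.0, n_max=2000):
    a = math.exp(-TE / T)
    Z = sum(a**n for n in range(n_max))                    # partition function, ~ 1/(1-a)
    return sum(n * TE * a**n for n in range(n_max)) / Z    # sum of f_n * E_n

def avg_energy_closed(T, TE=100.0):
    return TE / (math.exp(TE / T) - 1.0)

for T in (10.0, 100.0, 1000.0):
    print(T, avg_energy_sum(T), avg_energy_closed(T))  # the two columns agree
```

Truncating the sum at n_max = 2000 is harmless here because a is strictly less than 1, so the tail of the geometric series is negligible.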
If we set k TE equal to 1 and call T over TE x, this has the functional form 1 over e to the 1 over x minus 1, with x proportional to temperature. Here's a plot of that function. For large x, large temperature, the function becomes a line. This is the classical result that the energy of a system is proportional to temperature. The slope of that line is the heat capacity. But for small x, low temperature, the curve flattens out, so the heat capacity goes to zero, as required by the third law and observed experimentally. For a solid of n atoms, each of which can oscillate independently in three dimensions, the average energy is 3n times our 1D result. So u equals 3n k TE over e to the TE over T minus 1. Let's look at the high-temperature limit, T goes to infinity. In this case, TE over T goes to zero. Since e to a small exponent is well approximated by 1 plus the exponent, e to the TE over T can be replaced by 1 plus TE over T. The result is that the system energy u reduces to 3n k T. The rate of change of u with respect to T is the heat capacity, C equals 3nk, the prediction of the Dulong-Petit law. Now let's look at the low-temperature limit, T goes to zero. In this case, TE over T goes to infinity. Since e to a very large exponent is a huge number, the denominator, e to the TE over T minus 1, is essentially just e to the TE over T. Writing 1 over e to the TE over T as e to the minus TE over T, we find u goes to 3n k TE e to the minus TE over T. The slope or derivative of this function can be calculated with a computer algebra system. We find a heat capacity of 3nk, quantity TE over T squared, e to the minus TE over T. This goes to zero as T goes to zero. For reference, the exact heat capacity of the Einstein model is given at top in blue. Here's the Einstein model compared to the observed heat capacity of platinum. The agreement is good for higher temperatures.
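The two limits just derived can be checked against the exact Einstein heat capacity, obtained by differentiating u with respect to T: C equals 3nk times quantity TE over T squared, times e to the TE over T, over quantity e to the TE over T minus 1 squared. A per-mole sketch, where the Einstein temperature of 240 K is only an illustrative ballpark, not a value fitted to any material:

```python
import math

R = 8.314  # molar gas constant, J/(mol*K); per mole, 3nk = 3R

# Exact Einstein-model molar heat capacity:
#   C = 3R * (TE/T)^2 * exp(TE/T) / (exp(TE/T) - 1)^2
def einstein_C(T, TE=240.0):
    x = TE / T
    return 3 * R * x**2 * math.exp(x) / (math.exp(x) - 1.0) ** 2

print(einstein_C(3000.0))  # high T: approaches 3R, about 24.94 (Dulong-Petit)
print(einstein_C(10.0))    # low T: exponentially suppressed, essentially zero
```

Between the two limits the curve rises smoothly from zero toward the Dulong-Petit plateau, which is the shape compared against the platinum data.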
At low temperatures, the Einstein model has qualitatively correct behavior, with heat capacity decreasing to zero, but it decreases too fast and fails to give quantitatively correct results. Still, it indicates that the reason heat capacity decreases to zero with temperature is the quantum mechanical nature of matter. The reason why quantum theory predicts zero heat capacity at zero temperature is as follows. The entire solid is thermally equivalent to 3n one-dimensional quantum harmonic oscillators. The energy of a single oscillator can only be 0, k TE, 2 k TE, and so on. At temperature T, the thermal energy available in a single interaction is on the order of k T. If T is much less than the Einstein temperature TE, then almost no interactions will have enough energy to excite an oscillator from its ground state to its first excited state with energy k TE. So the oscillators will generally not be able to absorb energy. This results in the entire solid generally not being able to absorb energy, which means its heat capacity is zero.
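This freezing-out argument can be made quantitative. Since the occupation probabilities are f n equals quantity 1 minus a times a to the n, the probability that an oscillator sits anywhere above its ground state is just a, equal to e to the minus TE over T. A sketch, again with an arbitrary illustrative Einstein temperature of 240 K:

```python
import math

# P(oscillator is excited) = 1 - f_0 = a = exp(-TE/T),
# since f_n = (1 - a) * a**n with a = exp(-TE/T).
def excitation_probability(T, TE=240.0):
    return math.exp(-TE / T)

for T in (240.0, 24.0, 2.4):
    print(T, excitation_probability(T))
# At T = TE/10 the probability is e^-10, about 5e-5: nearly every oscillator
# is frozen in its ground state, so the solid can barely absorb heat.
```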