So far, we have used the ideas of statistical mechanics to derive the Sackur-Tetrode equation for the entropy of a monatomic gas of n atoms in a volume v at a temperature T. This is s equals kn times the quantity log of 2 pi m kT over h squared to the 3 halves times v over n, plus 5 halves. Here k is Boltzmann's constant and m is the atomic mass. We have one final issue to resolve. How do we choose the value of the parameter h? The other parameters k, n, m, and T have physical significance. But h, the area of a phase space rectangle, seems to have been introduced merely to allow us to apply our balls and boxes formula to the gas phase space. If we use our formula to calculate the entropy change in going from state 1 with temperature T1 and volume v1 to state 2 with temperature T2 and volume v2, all of the constants in the argument of the logarithm cancel. This leaves us with the result we derived in video 4 using thermodynamic arguments: S2 minus S1 equals kn times the natural log of T2 over T1 to the 3 halves power times v2 over v1. But if we want a formula for the absolute entropy of a given state, then we need a definite value for h. We might argue that there is no physical basis for dividing phase space into finite cells. It was only a math trick that allowed us to apply the balls and boxes formula, and we need to let h go to 0 to describe continuous phase space. But if h goes to 0, the argument of the logarithm goes to infinity, and the entropy goes to infinity. Maybe this is telling us that our derivation was fundamentally flawed. Otherwise, we need a way to choose a finite h value. One approach would be to experimentally determine the absolute entropy of a gas system, and then adjust h in our formula to agree with the measurement. Or maybe there is some theoretical basis for the value of h. At the time Sackur and Tetrode were working on this formula, there was a good candidate for the smallest unit of action in a mechanical system:
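The cancellation of h from entropy differences can be checked numerically. Here is a minimal Python sketch, not from the video: the function name and the helium example are mine, the physical constants are standard CODATA values, and v over n is rewritten as kT over p via the ideal gas law so the formula can be evaluated at a fixed pressure. The entropy difference between two states comes out the same no matter what cell constant is used.

```python
import math

# Physical constants (SI units; CODATA values, supplied for illustration)
k = 1.380649e-23      # Boltzmann's constant, J/K
u = 1.66053907e-27    # atomic mass unit, kg
R = 8.314462          # gas constant, J/(mol*K)

def molar_entropy(mass_u, T, p, cell):
    """Sackur-Tetrode molar entropy with an arbitrary phase-space cell
    constant `cell`, using V/N = k*T/p for an ideal gas at pressure p."""
    m = mass_u * u
    arg = (2 * math.pi * m * k * T / cell**2) ** 1.5 * (k * T / p)
    return R * (math.log(arg) + 2.5)

# Helium (4.0026 u) taken from 300 K to 600 K at fixed pressure:
T1, T2, p = 300.0, 600.0, 101325.0
for cell in (6.626e-34, 1e-30, 1e-20):   # wildly different cell constants
    dS = molar_entropy(4.0026, T2, p, cell) - molar_entropy(4.0026, T1, p, cell)
    print(round(dS, 4))                  # same answer every time

# The difference matches kn*ln((T2/T1)**1.5 * (v2/v1)); at constant p,
# v2/v1 = T2/T1, so here dS = R*ln((T2/T1)**2.5).
print(round(R * math.log((T2 / T1) ** 2.5), 4))
```

The absolute entropy from each call does depend on the cell constant; only the difference is invariant, which is exactly the point made above.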
Planck's constant, proposed in 1900. This has the very small but non-zero value 6.6 times 10 to the minus 34 joule seconds. Planck's constant was originally applied to electromagnetic radiation in the quantum hypothesis, according to which the energy of an electromagnetic field oscillating at frequency nu can only be exchanged in integer multiples of E equals h nu. But in 1907, Einstein showed that the quantum hypothesis, applied to mechanical oscillations, explained the low-temperature heat capacity of solids. So Planck's constant seemed to represent the smallest unit of action in both electromagnetic and mechanical phenomena. And in the 1920s, the development of quantum mechanics produced the uncertainty principle: delta x times delta p x is greater than or on the order of Planck's constant. This tells us that it is not possible to specify the location of an atom in phase space with a rectangle of area smaller than Planck's constant. So what started out as a math trick we used to divide phase space into cells is actually a physical requirement of quantum theory. Now, let's see if the Sackur-Tetrode equation actually works by comparing its predictions to experiment. Suppose we have a substance inside an insulated box. A thermometer measures the temperature. Inside the box is a resistive metal coil connected by wires to a voltage source outside the box. The source drives a current i at voltage v through the coil. This produces heat inside the box at the rate of v times i watts. Over a time dt, this generates heat delta q equals v i dt. As the system evolves, we can measure the temperature big T as a function of time little t. Adding heat delta q at temperature T increases entropy by ds equals delta q over T. If we assume zero temperature and zero entropy at time zero, then the absolute entropy at any time is the sum of all these changes: the integral from zero to time t of v i dt over T.
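The integral of v i dt over T can be carried out numerically from a measured temperature curve. Below is a minimal Python sketch with invented numbers, not data from any experiment: it assumes a constant heat capacity C and constant heating power, so the temperature curve and the entropy integral both have simple closed forms the numerical sum can be checked against.

```python
import math

# Illustrative sketch (made-up numbers): electrical heating at constant
# power P = V*I into a sample with constant heat capacity C, so that
# T(t) = T0 + P*t/C.  The entropy gained is the integral of P dt / T.
V, I = 12.0, 0.5          # volts, amps; P = V*I = 6 W
P = V * I
C = 25.0                  # heat capacity in J/K (assumed constant)
T0 = 10.0                 # starting temperature in K (a real run starts near 0)

def temperature(t):
    return T0 + P * t / C

# Midpoint-rule sum of dS = P dt / T over 0 <= t <= t_end
t_end, n = 500.0, 100000
dt = t_end / n
S = sum(P * dt / temperature((i + 0.5) * dt) for i in range(n))

# With constant C the integral has the closed form C*ln(T_end/T0),
# so the numerical sum can be checked directly.
S_exact = C * math.log(temperature(t_end) / T0)
print(round(S, 3), round(S_exact, 3))   # the two agree
```

A real measurement replaces the assumed T(t) with thermometer readings, and the heat capacity is neither constant nor known in advance; the sum-of-increments logic is the same.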
In this way, we know both temperature and entropy as functions of time, and therefore entropy as a function of temperature. As a practical matter, different experimental setups need to be used for different temperature intervals and the results pieced together. The assumption of zero entropy at absolute zero temperature is a statement of the third law of thermodynamics, and is the subject of the next video in this series. Here is the molar entropy versus temperature curve for mercury at standard atmospheric pressure. S in joules per mole kelvin is plotted versus temperature T in kelvin. Recall that a mole contains Avogadro's number of atoms, about 6 times 10 to the 23. Starting at s equals zero for T equals zero, the curve continuously increases with temperature. Then at the point labeled melt, there is a sudden jump in the curve as solid mercury transitions to liquid form. From there the entropy continuously increases with temperature until we reach the boiling point, where there is a very large jump as the liquid transitions to gas. These jumps are respectively due to the so-called latent heats of fusion, or melting, and vaporization, or boiling. When a substance undergoes one of these phase changes, heat energy is absorbed and converted into increased potential energy as the interatomic bonds are weakened. This absorbed heat does not change the temperature, but it does increase the entropy. Hence the jumps in the entropy curve. Curves for other elements have similar characteristics, but with different melting and boiling points. Here is the entropy curve for the noble gas krypton. The melting and boiling points are only about 4 kelvin apart, so the jumps are much closer together than for mercury. Now let's look at experimental and theoretical entropy values for 4 different gases. For neon at 27.1 kelvin, the experimental entropy in joules per mole kelvin is 95.0, while the Sackur-Tetrode equation predicts 96.4, an error of 1.47%.
For argon at 87.3 kelvin, the experimental value is 128.8, while the theoretical value is 129.2, an error of 0.31%. For krypton at 119.81 kelvin, experimental and theoretical values are 145.2 and 145.0 respectively, an error of minus 0.14%. And for mercury at 629.88 kelvin, we have values of 190.3 and 190.4, an error of only 0.05%. The predictions of the Sackur-Tetrode equation are in excellent agreement with experiment. This gives us confidence in our theoretical development of statistical mechanics. It is also a confirmation of the fundamental principles of quantum mechanics. This is quite surprising, since our entire development was in the realm of classical mechanics. But the success of the Sackur-Tetrode equation tells us that in a very real sense, there is a smallest meaningful area of phase space, specified by Planck's constant. The accuracy of the Sackur-Tetrode equation is remarkable, given the simplicity of the physical model it is based on. Our gas model consists of little billiard balls bouncing around inside a box. There are no interatomic forces other than elastic forces during collisions. In particular, the model contains no chemical forces that could account for phase changes. If we cool our model gas, all we get is a colder gas. The atoms will never condense to a liquid or freeze to a solid. But the experimental entropy curve tracks entropy increases for the solid and liquid phases, as well as for the latent heats of fusion and vaporization. Yet at the point at which the material becomes a gas, our simple model accurately predicts the sum of the entropy changes through all these phase changes. This is a powerful demonstration of the concept of entropy as a state variable, or function of state. The entropy of a gas depends only on the state of the gas, and is independent of the processes that led to that state. Therefore, our model only needs to give an accurate representation of this state.
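The four theoretical values quoted above can be reproduced directly from the formula. A minimal Python sketch follows; the physical constants and atomic masses are standard values I am supplying, and the formula is evaluated at the quoted temperatures at one atmosphere, with v over n rewritten as kT over p via the ideal gas law.

```python
import math

# Constants (SI units; CODATA values, supplied for illustration)
k = 1.380649e-23      # Boltzmann constant, J/K
h = 6.62607015e-34    # Planck constant, J*s
u = 1.66053907e-27    # atomic mass unit, kg
R = 8.314462          # gas constant, J/(mol*K)

def st_molar_entropy(mass_u, T, p=101325.0):
    """Sackur-Tetrode molar entropy, using V/N = k*T/p for an ideal gas."""
    m = mass_u * u
    arg = (2 * math.pi * m * k * T / h**2) ** 1.5 * (k * T / p)
    return R * (math.log(arg) + 2.5)

# (name, atomic mass in u, temperature in K, experimental S from the text)
gases = [("neon", 20.180, 27.10, 95.0), ("argon", 39.948, 87.30, 128.8),
         ("krypton", 83.798, 119.81, 145.2), ("mercury", 200.590, 629.88, 190.3)]

for name, mass_u, T, S_exp in gases:
    S_th = st_molar_entropy(mass_u, T)
    print(f"{name:8s} exp {S_exp:6.1f}  theory {S_th:6.1f}  "
          f"error {100 * (S_th - S_exp) / S_exp:+.2f}%")
```

Running this recovers the theoretical entries of the comparison, 96.4, 129.2, 145.0, and 190.4 joules per mole kelvin, to within rounding.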
It does not need to explain other possible states of the substance. Of course, this also points out a limitation of the Sackur-Tetrode equation. It is only applicable to states in which the substance is a gas. If, for example, we plug in a temperature at which the actual substance would be a liquid, the equation will not accurately predict the observed entropy. Now, let's look at another experimental test of our results, in the form of the Maxwell-Boltzmann speed distribution. The probability that an atom will be in phase space cell i is proportional to e to the minus epsilon i over kT, where epsilon i is the kinetic energy of the cell. This is equal to e to the minus m Vi squared over 2 kT, where Vi is the speed of an atom in the cell. Cells which have the same momentum magnitude, hence the same speed, lie on a sphere in momentum space. The sphere's surface area is proportional to V squared, so the number of such cells is proportional to V squared. Therefore, the probability that an atom has speed V is proportional to V squared times the exponential factor. We call the distribution F of V, and the constant of proportionality A, requiring that the sum of all probabilities equals one. We can solve for A to get F of V equals square root of 2 over pi, times m over kT to the three halves, times V squared, times e to the minus m V squared over 2 kT. F of V equals zero for V equals zero and approaches zero for large V. In between, it reaches a peak at speed V peak equals square root of 2 kT over m. Multiplying numerator and denominator by Avogadro's number, this can also be expressed as square root of 2 RT over big M. Here R is the gas constant and big M is the molar mass. Here are plots of this distribution for the noble gases helium, neon, argon, and xenon at 25 degrees Celsius. For lighter atoms, the distribution is spread out over larger speeds. The Marcus and McFee experiment of 1959 sought to verify the Maxwell-Boltzmann speed distribution. Here is a schematic of the experiment.
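The distribution and its peak are easy to check numerically. A minimal Python sketch, with constants supplied by me rather than taken from the video: it verifies that F of V integrates to one, and that the peak speed matches square root of 2 RT over M, for neon at 25 degrees Celsius.

```python
import math

# Sketch of the Maxwell-Boltzmann speed distribution:
#   f(v) = sqrt(2/pi) * (m/(k*T))**1.5 * v**2 * exp(-m*v**2 / (2*k*T))
k = 1.380649e-23      # Boltzmann constant, J/K
R = 8.314462          # gas constant, J/(mol*K)
u = 1.66053907e-27    # atomic mass unit, kg

def f(v, mass_u, T):
    m = mass_u * u
    a = m / (k * T)
    return math.sqrt(2 / math.pi) * a ** 1.5 * v * v * math.exp(-0.5 * a * v * v)

mass_ne, T = 20.18, 298.15   # neon at 25 degrees Celsius

# Midpoint-rule check that the probabilities sum to one
n, v_max = 200000, 5000.0
dv = v_max / n
total = sum(f((i + 0.5) * dv, mass_ne, T) * dv for i in range(n))
print(round(total, 6))       # very close to 1

# Peak speed: sqrt(2*k*T/m), equivalently sqrt(2*R*T/M) with molar mass M
v_peak = math.sqrt(2 * R * T / (mass_ne * 1e-3))
print(round(v_peak, 1))      # roughly 496 m/s
```

Swapping in the mass of helium or xenon shifts the peak up or down as the plots described above show: lighter atoms have higher, more spread-out speed distributions.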
A chamber is divided into three sections. The left section is an oven where gas atoms are kept at constant temperature and pressure. In the right wall of this section is a small orifice through which atoms can pass into the middle section. In the right wall of that section is another orifice. Atoms which pass through both orifices are traveling rightward in more or less a straight line. Atoms that leave the left orifice at an angle hit the right wall of the middle section and are removed by a vacuum pump. This keeps atoms from building up and impeding the flow of atoms between the two orifices. The result is a beam of atoms flowing rightward through the right section. They strike a detector on the far right wall that counts the number of atoms reaching it per given time. The atoms in the beam have a distribution of speeds. To filter out a single speed, rotating slotted disks are placed in the beam path. These rotate at an angular frequency omega. The slots are offset by an angle theta. For an atom to reach the detector, it must pass through the first disk slot. Then it must travel the length L between the disks in the time it takes them to rotate through the angle theta, so that it passes through the second slot. If this time is t, then L equals V t, and theta equals omega t. Eliminating t from these equations, we find V equals omega L over theta. So by adjusting the angle theta or the angular frequency omega, we can tune this mechanical filter to select atoms of a given speed. Here are the results of the experiment. On the vertical axis, the intensity of atoms arriving at the detector increases from bottom to top. On the horizontal axis, speed increases from left to right. The solid curve is the prediction of the Maxwell-Boltzmann distribution. The circles are the observed distribution. The agreement is excellent. This is another striking confirmation of the validity of our model and statistical mechanics analysis.
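The selector relation V equals omega L over theta is simple enough to put into code. A tiny sketch with made-up numbers, not the dimensions of the actual Marcus and McFee apparatus:

```python
# Slotted-disk velocity selector: an atom passes both slots only if it covers
# the disk separation L in the time the disks rotate through the offset theta.
#   L = v * t  and  theta = omega * t   =>   v = omega * L / theta
# (All numbers below are illustrative, not from the real apparatus.)

def selected_speed(omega, L, theta):
    """Speed passed by the filter, in m/s (omega in rad/s, L in m, theta in rad)."""
    return omega * L / theta

# Example: disks 0.5 m apart, slots offset by 0.25 rad, spinning at 500 rad/s
print(selected_speed(500.0, 0.5, 0.25))    # 1000.0 m/s

# Tuning: doubling omega (or halving theta) doubles the selected speed.
print(selected_speed(1000.0, 0.5, 0.25))   # 2000.0 m/s
```

Sweeping omega while recording the detector intensity traces out the speed distribution, which is exactly how the measured points compared against the Maxwell-Boltzmann curve are obtained.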