Welcome to Thermodynamics 5 – Statistical Mechanics. In the previous video, we saw that the entropy change of an ideal monatomic gas undergoing free expansion could be understood in terms of a simple statistical model: entropy equals Boltzmann's constant times the logarithm of the number of ways the molecules can be arranged within the volume. In this video, we want to expand this idea into a rigorous, detailed description of the state of an ideal monatomic gas, using the methods of statistical mechanics, which we will develop. This is a challenging problem. When faced with a difficult problem, one approach is to start with a toy problem, a problem that has some similarities to the hard problem but is very much simpler to analyze. Our toy problem will be flipping coins. Suppose you take four coins, shake them in your hand, toss them on a table, and count the number of heads and tails. What will you find? Well, I did that, and here are the results: three coins show heads, and one coin shows tails. And if you use a US penny, nickel, quarter, and dollar coin, the nickel will be the one that shows tails. But of course, this is not the type of experiment that has a single predetermined outcome. Theoretically, if we knew exactly how the coins were shaken and tossed, we could in principle solve the equations of motion and predict how they would land. But in practice, the best we can do is to think of a given outcome as a realization of a random process and calculate the probabilities of the various possible outcomes. That is, we analyze the mechanics in a statistical rather than a deterministic manner. So let's do that, first considering how many ways we can get zero heads and four tails. The answer is: one way. All the coins show tails. We represent the coins by different colored arrows, with an up or down arrow corresponding to heads or tails. How about one head and three tails? We can sketch out the four ways this can happen.
Any one of the four coins can show heads, and the other three tails. For two heads and two tails, we find six ways. The three heads and one tail case is the mirror image of the one-three case, and the four-zero case is the mirror image of the zero-four case. Let's call the specific sequence of heads and tails the microstate of the system, and the total number of heads and tails the macrostate of the system. There are 16 possible microstates, and we want to calculate the probabilities of the five possible macrostates. To do this, we need to make an assumption. We'll call this our fundamental postulate, which is: all microstates are equally probable. This will be true if each coin is fair, meaning it has a 50-50 probability of landing heads or tails, and if the coins are independent, that is, the outcome for one coin has no effect on the outcome for another coin. Of course, this does not have to be the case. You can imagine subtle changes in the shape of a coin causing the aerodynamics to slightly favor one face landing up, and if the coins were slightly magnetized, they would tend to influence each other's dynamics. However, we postulate that the coins are fair and independent, so that all microstates are equally probable. We can test this by seeing if it leads to accurate predictions. Assuming our postulate is valid, the probability of any macrostate equals the number of microstates that produce that macrostate, divided by the total number of microstates. So the probability of each of the zero-four and four-zero states is one over sixteen, the probability of each of the one-three and three-one states is four over sixteen, and the probability of the two-two state is six over sixteen. The brute-force approach to calculating probabilities is only practical for a small number of coins. By considering the process of distributing a number of balls among various boxes, we can derive simple mathematical expressions for the probabilities of, among other things, the coin flip problem.
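The brute-force count just described can be reproduced in a few lines of Python. This sketch is my own illustration, not part of the video; the names `microstates` and `macrostate_counts` are mine.

```python
from itertools import product
from collections import Counter

# Enumerate all 2**4 = 16 microstates of four fair, independent coins.
# Each microstate is a tuple such as ('H', 'H', 'T', 'H').
microstates = list(product("HT", repeat=4))
total = len(microstates)

# The macrostate is the total number of heads in a microstate.
macrostate_counts = Counter(state.count("H") for state in microstates)

for heads in sorted(macrostate_counts):
    ways = macrostate_counts[heads]
    print(f"{heads} heads: {ways} of {total} microstates")
```

Running this prints the counts 1, 4, 6, 4, 1, matching the arrow diagrams above, and dividing each count by 16 gives the macrostate probabilities.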
Suppose we have n equals twelve balls, each a distinct color, and we put a equals three of them in a box. How many ways can we do this? That is, how many unique a-ball subsets can be formed from a set of n distinguishable balls? Imagine the following procedure for selecting balls to go in the box. Place the n balls in a bag. Then, by randomly selecting balls one at a time, order them in a row. The first a balls then go into the box. There are n factorial ways to order, or permute, n balls, with n factorial equal to n times n minus one times n minus two, and so on, down to one, because for the first ball there are n possible selections, for the second ball n minus one possible selections, and so on. However, this does not mean that there are n factorial a-ball subsets that can end up in the box. Many of these permutations will have the same a balls in the box, with those a balls and/or the n minus a other balls in a different order. There are a factorial ways to reorder the a balls in the box, and n minus a factorial ways to reorder the balls outside the box. Therefore, there are a factorial times n minus a factorial permutations that have the same a balls in the box. So the number of unique a-ball subsets that can be chosen from n balls is n factorial divided by a factorial times n minus a factorial. We call this n choose a, and denote it by n above a inside parentheses. Let's apply this result to our coin flip problem. Let n equal four, with the balls representing the four coins, and the a balls in the box representing the coins that show heads. Then four choose a is the number of ways for a heads to show. For a equals one, four choose one is four factorial over one factorial times three factorial, which equals four, and so on. There is a subtle point for a equals zero or four: we get a factor of zero factorial in the denominator.
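As a quick check on the formula, n choose a can be computed directly from factorials. This is a minimal sketch of my own; Python's standard library also provides `math.comb` for the same quantity, which the sketch checks against.

```python
from math import factorial, comb

def n_choose_a(n, a):
    # n! / (a! * (n - a)!): the number of unique a-ball subsets of n balls.
    return factorial(n) // (factorial(a) * factorial(n - a))

# The coin-flip problem: n = 4 coins, a heads showing.
for a in range(5):
    print(a, n_choose_a(4, a))

# The twelve-balls example: the number of 3-ball subsets of 12 balls.
print(n_choose_a(12, 3))  # → 220, agreeing with comb(12, 3)
```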
There are good mathematical reasons why zero factorial equals one, but for our purposes we can simply define zero factorial to be one, so our results match what we found by brute-force counting. The numbers of microstates for a equals zero through four are one, four, six, four, and one. Now that we have a mathematical expression for the number of microstates corresponding to a heads showing when we flip n coins, we can simply plug in values for n and a, and let a computer calculate the results. Here is a plot of the results for n equals four, with a varying from zero to four. Here is the plot for n equals 40. The number of microstates for a equals 20 is almost 140 billion. Obviously, we weren't going to figure that out by counting all the microstates. For n equals 400, the a equals n over 2 macrostate has more than 10 to the 119 microstates. That's ten million trillion googol, a more than astronomically huge number. Notice how, as n increases, the n choose a curve becomes more narrowly peaked. In fact, it approaches a bell curve, and we can make the following statement: 99.99% of the microstates correspond to a macrostate with a over n differing from one half by no more than two over the square root of n. For the case shown, 99.99% of the microstates have a between 160 and 240. So as n approaches infinity, statistics become destiny. For practical purposes, the state of the system will not deviate significantly from the most likely macrostate. The number of microstates for a equals 200 is 400 choose 200, which equals 1.03 times 10 to the 119. The total number of microstates is the sum over a from 0 to 400 of 400 choose a, which equals 2.58 times 10 to the 120. At the end of the last video, we saw evidence that the entropy of a system equals Boltzmann's constant times the natural log of the number of ways its components can be rearranged, the number of microstates.
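These large-n claims are easy to verify with exact integer arithmetic. The sketch below is my own, not from the video; the window bounds 160 and 240 follow from the two-over-square-root-of-n statement above with n equals 400.

```python
from math import comb, log

n = 400
peak = comb(n, n // 2)   # 400 choose 200, about 1.03e119
total = 2 ** n           # sum of 400 choose a over all a, about 2.58e120

# Fraction of microstates with a/n within 2/sqrt(n) of 1/2,
# i.e. a between 160 and 240 for n = 400.
width = round(2 / n ** 0.5 * n)  # = 40
window = sum(comb(n, a) for a in range(n // 2 - width, n // 2 + width + 1))
print(f"fraction in window: {window / total:.6f}")

# Natural logs of the two microstate counts, for the entropy comparison.
print(f"ln(peak)  = {log(peak):.0f}")   # about 274
print(f"ln(total) = {log(total):.0f}")  # about 277
```

The printed fraction confirms that well over 99.99% of the microstates fall in the 160-to-240 window.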
For our current system, to three digits, the log of the number of microstates corresponding to the most likely a equals 200 macrostate is 274. The log of the number of all microstates is 277, only about 1% greater. As n grows, this difference gets even smaller. Therefore, in practice, to calculate entropy, we can use the log of the number of microstates corresponding to the most likely macrostate in place of the log of the total number of microstates. Now let's extend our balls-in-a-box scenario to allow for multiple boxes. Given n balls, we choose a1 to put in box 1. There are n choose a1 ways to do this, leaving n1 equals n minus a1 balls. We choose a2 balls to put in box 2. There are n1 choose a2 ways to do this, leaving n2 equals n1 minus a2 balls. We choose a3 balls to put in box 3. There are n2 choose a3 ways to do this, leaving n3 equals n2 minus a3 balls. Finally, we place all a4 equals n3 remaining balls in box 4. There are n3 choose a4 ways to do this, which is just one way. The number of possible arrangements is then the product of these four n choose a factors. The n minus a1 factorial in the denominator of the first factor cancels the n1 factorial in the numerator of the second factor, and this pattern repeats for all subsequent factors. And n3 minus a4 factorial is zero factorial, which equals one. This leaves n factorial over a1 factorial times a2 factorial times a3 factorial times a4 factorial, which is easily generalized to any number of boxes as the following expression. Let gamma be the number of ways to distribute n balls among some number of boxes, with a1 balls in the first box, a2 in the second box, and so on. Then gamma equals n factorial over a1 factorial times a2 factorial, and so on. This is the fundamental result we need in order to apply the balls-and-boxes model to the dynamical states of atoms in a gas.
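The generalized expression for gamma translates directly into code. This sketch is my own; the function name `gamma_ways` and the example ball counts are mine, not from the video.

```python
from math import factorial

def gamma_ways(counts):
    # n! / (a1! * a2! * ...): the number of ways to distribute
    # n = sum(counts) distinguishable balls with counts[i] balls in box i.
    ways = factorial(sum(counts))
    for a in counts:
        ways //= factorial(a)
    return ways

# Two boxes reduce to n choose a: the two-two coin macrostate.
print(gamma_ways([2, 2]))        # → 6

# Four boxes: 12 balls split 3, 4, 2, 3.
print(gamma_ways([3, 4, 2, 3]))  # → 277200
```

With only two boxes, gamma reduces to n choose a, so the same function reproduces all of the coin-flip counts from earlier in the video.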