We have seen this expression for how to calculate the entropy if what we know is the multiplicity, and that's useful enough. But in a chemistry context, that's often not actually what we want. If we're dealing with dice or cards or flipping coins, then we know the multiplicity: we know how many ways a coin can come up, heads or tails, or how many ways our die can give us a 1 through 6. But in a chemistry problem we often don't know, or don't want to take the time to calculate, the multiplicity. What we often do know is probabilities. We might know, for example, that a butane molecule has a certain probability of being in the anti state or the gauche plus state or the gauche minus state. We might know, for a given reaction, the probabilities that we form the product we're interested in or some side products. We might know the probabilities that molecules are in certain energy levels. So it's common that we know the probabilities, and it would take some extra work to figure out the multiplicity. What we'd really rather have, instead of this equation, is an expression that gives us the entropy as a function of probabilities rather than multiplicity. So that's our next task.

Luckily, that's not terribly difficult to do. Nearly every chemistry problem can be thought of as a multinomial problem, and as we've talked about before, we can calculate the multiplicity for a multinomial distribution. What I mean by a multinomial problem is this: say we have 10 molecules, or Avogadro's number of molecules. Then we might be interested in knowing how many of those molecules partition themselves into the ground state or an excited state, or into the gauche conformation or the anti conformation. If we have more than two options, then it's not binomial with two choices, but multinomial. In general, if we have big N objects or molecules and a bunch of different choices for what those molecules can do, then the multiplicity is just the multinomial coefficient: the number of ways of putting those big N molecules into states or conformations or outcomes such that little n sub 1 of them are in state 1, little n sub 2 of them are in state 2, little n sub 3 of them are in state 3, and so on. So we can write the multiplicity in terms of this multinomial coefficient. And instead of a dot-dot-dot telling us how many factorials to calculate in the denominator, it's a little bit cleaner if we use product notation and write the product of all the n sub i factorials in the denominator.

That gives us an expression for the multiplicity. If we want to calculate the entropy, we just need to plug this multiplicity into the expression for the entropy. So the entropy of any problem that we can write as a multinomial is k times the natural log of W: k times the natural log of big N factorial over all these little factorials, or, if we prefer product notation, k times the natural log of big N factorial over the product of all the little n sub i factorials.
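To keep the symbols straight, here is what we have so far, written out. This just transcribes the equations described in words above, with W for the multiplicity, S for the entropy, and k for Boltzmann's constant:

```latex
W = \frac{N!}{n_1!\, n_2!\, n_3! \cdots} = \frac{N!}{\prod_i n_i!}
\qquad\Longrightarrow\qquad
S = k \ln W = k \ln\!\left(\frac{N!}{\prod_i n_i!}\right)
```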
So the next step is to evaluate that logarithm. We have the logarithm of a quotient, numerator over denominator, which according to the log rules is just a difference of logarithms. If we want to use the explicit formula involving all of the little n's, we can say this is k times the quantity natural log of big N factorial, minus natural log of little n 1 factorial, minus natural log of little n 2 factorial, and so on. Or, using product notation, the log of the product in the denominator becomes the sum of the logs of the little n factorials, with a negative sign because they're on the bottom of the fraction. So I can write this as k times the natural log of big N factorial minus the natural logs of the little n sub i factorials, summed up over all the possible values of i: n sub 1, n sub 2, n sub 3, and so on. This is just the summation-notation version of the same expression, which is a little more compact. From this point on, I'll stop writing out the explicit version with the dot-dot-dots and just continue with the summation notation.

So what we want to use for the entropy is k times this difference between the natural log of big N factorial and the sum of all the natural logs of the little n factorials. We know what to do with the log of a factorial: we can use Stirling's approximation. Stirling tells us that the log of big N factorial is big N log big N minus big N. Likewise, the log of little n sub i factorial is little n sub i log little n sub i minus little n sub i, and that quantity appears once, with a minus sign, for each value of i.

So far, so good. I can clean that up a little. The second quantity in parentheses, summed over i, is the sum of the n log n's and the sum of the n's, so I can break it up into two separate summations. That gives big N log big N, minus big N, minus the sum of the n sub i log n sub i terms, minus negative the sum of the n sub i's, which is plus the sum of the n sub i's. That's making some progress, because now we can see some cancellation. What happens with the sum of the little n's, the sum of how many molecules are in state 1, how many molecules are in state 2, and so on? If I add those numbers up, that's just the total number of molecules. So this minus big N, the negative of the total number of molecules, must cancel the positive sum of the numbers of molecules in each individual state. That's always going to happen when we have a binomial or a multinomial: the sum of all the little n's in the denominator is always the same number as the big N in the numerator. So those two terms cancel each other, and what we're left with is big N log big N minus the sum of the little n log little n terms.

We can use the same trick again, writing big N as the sum of the little n's, and I'll do that here for the first big N: when I rewrite this line, that big N turns into a sum of little n's, each of which gets multiplied by log of big N. But I'm going to leave the second big N, the one inside the logarithm, alone. So I've got k times the sum of the little n's, each multiplied by log of big N, and I subtract from that the little n log little n terms. The reason I've done that expansion in the first term is that now I can combine these two sums into one, each term of which involves a little n sub i: in the first part it's multiplied by log of big N, and in the second part it's multiplied by log of little n sub i.
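Written compactly, the log-rule expansion and the Stirling steps just described are:

```latex
S = k\left[\ln N! - \sum_i \ln n_i!\right]
  \approx k\left[\left(N\ln N - N\right) - \sum_i \left(n_i\ln n_i - n_i\right)\right]
  = k\left[N\ln N - \sum_i n_i\ln n_i\right]
  = k\left[\sum_i n_i\ln N - \sum_i n_i\ln n_i\right]
```

The minus big N cancels against plus the sum of the little n's because they add up to big N, and the same identity turns big N log big N into the sum of the little n's times log big N.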
So that becomes the sum over i of n sub i times the quantity log of big N minus log of little n sub i. But that quantity in parentheses is something we can simplify with the log rules: log of something minus log of something else is the log of the quotient. So I'll rewrite that now. We're still calculating the entropy, and rewriting this equation, the entropy is equal to k times the summation of little n sub i times the log of big N over little n sub i, summed over all possible states that can have some occupation little n sub i.

That's begun to look quite simple and compact, but we can make it a little more understandable still. This quantity, big N over little n sub i, will make more sense if I think about it the other way around. Let's ask ourselves what little n sub i divided by big N means. That's the number of molecules in state i divided by the total number of molecules. So, for example, the number of molecules in state 1 divided by the total number of molecules is just telling us what fraction of the molecules are in state 1. N sub i over big N tells us what fraction of the molecules are in state i, or, if I pick one of the molecules at random, the probability that it ended up in state i. So the quantity big N over n sub i is not so interesting in itself, but its inverse is the probability of being in state i. If I turn the fraction inside the log upside down, so that I'm summing n sub i times the log of little n sub i over big N, then because I've inverted the inside of the log, I just have to stick a negative sign out front. And now I can rewrite that quantity as p sub i, so I've got minus k times the sum of n sub i log p sub i.

We're almost to the point that I told you we were trying to get to at the beginning: writing the entropy as a function of the probabilities rather than the multiplicity or the little n's. One last step is to transform this remaining little n sub i into a probability as well. If I could just take that little n sub i and divide it by big N, then I could call it a probability too. So if I divide both sides of this equation by big N, then on the right I've got minus k times the sum of little n sub i over big N multiplying log p sub i, and that little n sub i over big N is the thing we're calling the probability. So I've got minus k times the sum of p sub i log p sub i. And the quantity on the left, S divided by N, is just the entropy per molecule: rather than the extensive entropy, the amount of entropy for the total collection of molecules, this is the intensive entropy, the entropy per molecule. So we'll call that s bar, for the intensive entropy.
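Putting those last steps in symbols, with p sub i equal to n sub i over big N, gives the result we just arrived at:

```latex
S = k \sum_i n_i \ln\frac{N}{n_i}
  = -k \sum_i n_i \ln\frac{n_i}{N}
  = -k \sum_i n_i \ln p_i
\qquad\Longrightarrow\qquad
\boxed{\bar{S} = \frac{S}{N} = -k \sum_i p_i \ln p_i}
```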
That's the equation I'll put a box around, because it's an important one that we'll come back to: it tells us how to calculate the intensive entropy as a function of the probabilities. If I know the probabilities of occupying state number 1, 2, 3, or conformation number 1, 2, 3, or product number 1, 2, 3, then I just sum up probability times log probability for each of those states and multiply by negative k, and that gives me the intensive version of the entropy for that system. That's very convenient, because I get to use the probabilities rather than having to think about the multiplicity. Notice the multiplicity is long gone; I don't have to know it to use this expression. And as a bonus, I get the intensive entropy. I don't even have to know how many molecules I had, or specifically how many of them occupy each state; I just need to know the probability of occupying each state.

So, for example, let's return to the example of butane, where at room temperature, as we've used before, the probability of being in the anti state is 68%, and the probabilities of being in the gauche plus and gauche minus states are both 16%. That's enough information for us to calculate the entropy per butane molecule, the intensive entropy. According to our new formula, that's just minus k times the sum of p log p: minus k times the quantity 0.68 log 0.68, plus 0.16 log 0.16, plus another 0.16 log 0.16. If I do the calculation, the quantity in brackets comes out to a total of negative 0.85 (there's a short numerical check in code at the end of this section). That points out why this expression has a negative sign in it to begin with. Each of these probabilities is less than or equal to 1, at most a 100% chance of being in any individual state, so the log of each probability comes out negative, and the negative sign out front guarantees that the entropy ends up positive. So the quantity in brackets is negative 0.85, and after canceling the negative sign, I get positive 0.85 times k as the entropy per molecule of butane. If we want to, we can plug in Boltzmann's constant: multiplying 0.85 by 1.38 times 10 to the minus 23 joules per kelvin gives a number with some units on it. But for now, we're not so much interested in that numerical value as in the fact that we can calculate this entropy as a multiple of k, because we haven't yet talked about where that value of Boltzmann's constant comes from.

The important thing for right now is that we know how to calculate the entropy as a function of the probabilities. Remember where we obtained the entropy: as an extensive version of the multiplicity, and now an intensive version of it; the multiplicity itself was neither an extensive nor an intensive property. So in the future, when we're interested in finding out how likely it is that molecules exist in different states, what we want to find is the combination of molecules that's most probable, the one with the highest multiplicity. Another way of answering that question will be to ask not which arrangement has the highest multiplicity, but which has the highest entropy. And with this formula for the entropy, we can figure out what the probabilities are that maximize the entropy. So that's the next step: to talk about how to find the values of the probabilities that are going to maximize the entropy.
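As a quick numerical check of the butane example above, here is a minimal Python sketch; the probabilities 0.68, 0.16, 0.16 and the value of Boltzmann's constant are the ones quoted in this section:

```python
import math

# Probabilities of the butane conformations at room temperature (from above):
# anti, gauche plus, gauche minus
p = [0.68, 0.16, 0.16]

k_B = 1.38e-23  # Boltzmann's constant, J/K

# Intensive entropy: s_bar = -k * sum over i of p_i * ln(p_i)
sum_p_ln_p = sum(p_i * math.log(p_i) for p_i in p)  # the bracketed quantity, about -0.85
s_bar = -k_B * sum_p_ln_p                           # entropy per molecule, J/K

print(f"sum of p ln p        = {sum_p_ln_p:.3f}")   # -0.849
print(f"entropy per molecule = {s_bar:.3e} J/K")    # about 1.17e-23 J/K
```

The bracketed sum comes out to about negative 0.85, so the entropy per molecule is about 0.85 k, or roughly 1.17 times 10 to the minus 23 joules per kelvin once Boltzmann's constant is plugged in.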