Okay, everybody, have a good Easter weekend. You did well on quiz one. I don't know if you had a chance to look at the grade book, but out of the 113 of you who took the quiz, 98 got some flavor of A. The mean was 4.5 out of 5, so that's an A. That's fantastic; keep it up. Quiz two is going to look a lot like quiz one in terms of format: there'll be five questions, and I'll have more to say about what's on it on Wednesday. A key to quiz one has been posted on the results page of our website if you want to take a look, so you can decipher my marks and my handwriting. Here's the histogram, a thing of beauty. Okay, what we want to do today is briefly review what we've already learned in statistical mechanics, to make sure we're all on the same page. The first thing we learned at the beginning of last week was that isolated systems of N molecules have discrete states, be they electronic, vibrational, or rotational. We didn't say anything about the origin of these states, but of course molecules can have any or all of them. These molecules assume microstates that can be grouped into configurations, each having a defined number of microstates; we called that W. So we talked about microstates, and we talked about families of microstates called configurations. One thing we didn't say is that it's a fundamental postulate of statistical mechanics that, over a long period of time, each microstate of a system is equally probable. I don't think we said that until now, but it's a rather profound statement. One thing we did say last week is that there's a special configuration that is most probable, and the reason it's most probable is that it has by far the most microstates, and the reason that makes sense is that each microstate of the system is equally probable as a function of time.
As N, the number of molecules, increases, a predominant configuration emerges that predominates over all of the other configurations. The predominant configuration has the largest W of all possible configurations. We talked about three molecules containing three quanta of energy and immediately found a predominant configuration. We talked about four coin flips and found a predominant configuration. Even though those systems are really tiny, we can identify a predominant configuration in them. And we pointed out that as the number of molecules increases and the system becomes more and more macroscopic, the degree to which this predominant configuration predominates becomes greater and greater, to the point where you can imagine how sharp this distribution will be when this number is 10 to the 23 instead of 1,000. There's going to be an extremely small area, if you will, of configuration space that we have to consider in terms of understanding the behavior of that system. And we said the Boltzmann distribution law describes the properties of this predominant configuration. It doesn't describe the properties of all configurations, only the predominant one. In particular, it specifies how the energy levels of the predominant configuration are occupied as a function of temperature. Here's the Boltzmann distribution law in its simplest form. We also said that a particularly useful form of the Boltzmann distribution law involves the definition of something called the partition function q, and here are two versions of the law written that way. In one case we're taking a summation over states i; each energy level can have multiple states, and we call that the degeneracy. Or we can take the summation over energy levels instead of states, and in that case we have to multiply by the degeneracy explicitly.
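The three-molecules, three-quanta example is small enough to check by brute force. Here's a short sketch (not from the lecture; the enumeration scheme and names are mine) that lists every microstate and groups them into configurations:

```python
from itertools import product
from collections import Counter

# Enumerate every microstate of 3 molecules sharing 3 quanta of energy:
# each molecule holds 0-3 quanta, and the total must be exactly 3.
microstates = [s for s in product(range(4), repeat=3) if sum(s) == 3]

# Group microstates into configurations: a configuration is the occupation
# pattern regardless of which molecule holds which quanta.
configs = Counter(tuple(sorted(s, reverse=True)) for s in microstates)

print(len(microstates))       # total number of microstates
for config, W in sorted(configs.items()):
    print(config, W)          # W = microstates belonging to this configuration
```

Running this shows the (2, 1, 0) configuration has W = 6, more than (3, 0, 0) with W = 3 or (1, 1, 1) with W = 1, so even in this tiny system a predominant configuration emerges.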
So these two equations are identical; it just depends on whether we're talking about individual states or energy levels. It's more convenient to use the second form if we know the degeneracies. Finally, we said that, to first order, the partition function is the number of states that are accessible to the system at a particular temperature. That's how we define the partition function, but that definition actually trivializes what it is, because what it really is, is the analog in statistical mechanics of the wave function. The wave function, as you know, contains all of the quantum mechanical information of the system. We can use operators to extract information on the momentum, the position, and so on of our quantum mechanical system from the wave function. The partition function does that for statistical mechanics: it contains all of the thermodynamic information on the system. The partition function is a profound concept; just like the wave function in quantum mechanics, it has the same lofty status in statistical mechanics. Can that be true? All thermodynamic information embedded in the partition function? It doesn't look like that big a deal; it's just a summation. Well, we showed on Friday that, for example, you can calculate the average internal energy of the system using the partition function. We derived a nice expression for doing that. The average internal energy of the system is just the total energy divided by the number of molecules, and with a little algebra we can make a substitution for n from the Boltzmann distribution law.
And when we do that, we get this expression here, and if we recognize that this term is just a derivative of the partition function with respect to beta, we get these really simple expressions: the total energy, where we multiply by big N, and the average internal energy, where we don't. This is just the partial derivative of the partition function with respect to beta, and this is just one over the partition function. The partition function has this information in it, if we know how to ask it for the information. Here we're asking the question: what is the internal energy of a single molecule? So we've learned about one case where we can extract information from the partition function, and we'll be learning about other cases where this is also true. We derived this equation on Friday. Now, let me point out that this might be the least intuitive equation in all of chemistry. By no means should you look at this equation and go, oh yeah, that's obviously the average internal energy of a molecule. Why one over the partition function? And there's a minus sign in front of it, and it's the partial derivative of the partition function with respect to beta, which is one over kT. I have zero intuition regarding this equation. If you have more than zero, you have more than me. But we can derive this equation, and we have to trust the mathematics that we've done. It wasn't a complex derivation; we just did it again in about three slides. Even though this equation has no intuitive basis, it does in fact tell us what the average internal energy of a molecule is. We can trust it; we derived it ourselves, for goodness' sakes.
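You don't have to take the result purely on faith, either: ⟨E⟩ = −(1/q)(∂q/∂β) can be checked numerically against a direct Boltzmann average. A minimal sketch for a hypothetical two-level molecule (the energy value is made up for illustration):

```python
import math

# Two-level system: energies 0 and eps (joules, arbitrary illustrative value)
eps = 2.0e-21
k = 1.380649e-23          # Boltzmann's constant, J/K
T = 300.0
beta = 1.0 / (k * T)

def q(b):
    """Partition function for the two-level system at inverse temperature b."""
    return 1.0 + math.exp(-b * eps)

# Numerical central-difference derivative of q with respect to beta
h = beta * 1e-6
dq_dbeta = (q(beta + h) - q(beta - h)) / (2 * h)
E_avg = -dq_dbeta / q(beta)          # <E> = -(1/q) dq/dbeta

# Direct Boltzmann average for comparison: sum of E_i * p_i
E_direct = eps * math.exp(-beta * eps) / q(beta)

print(E_avg, E_direct)
```

The two numbers agree to numerical precision, which is a nice sanity check that the unintuitive formula really does return the average energy.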
But don't imagine that you should have some magical intuition about this equation; I don't, and I don't expect you to. It's not an intuitive equation to me at all. Now, we didn't finish this example on Friday. We evaluated the term populations, but we didn't calculate the electronic contribution to the molar internal energy at 300 kelvin because we ran out of time. Let's do that, because it takes us to this concept of energy. We said NO has electronic states at zero and at 121.1 wave numbers, and they're both doubly degenerate. In principle, this is the equation we should be able to use to calculate the electronic contribution to the molar internal energy. One thing we did on Friday is calculate this curve, the partition function as a function of temperature; we derived an equation for it that I'll show you in just a moment. Here's what the partition function does, and you'll recall that NO has four states, so the partition function has an asymptote at four. It can't be larger than that. And it can't be smaller than two, because we're always going to have two states that are thermally accessible. So this thing makes intuitive sense: it starts at two, and, although it's not 100% clear from the plot, it asymptotically approaches four at very high temperatures. So qualitatively this partition function is doing what we expect it to do. We're interested in the internal energy at 300 kelvin, so if I look at 300 on this plot and go to the curve, the partition function at 300 is 3.119, and I'm going to need that number in just a second. Okay, so if I want to evaluate dq/dβ, that term right there, I can do that. I'm going to take the derivative of q with respect to β. Here's the partition function that we derived on Friday.
It's got two terms, one for each state, or one for each energy level, I should say. There's the two, that's the degeneracy of the ground state; there's the energy of the ground state; there's the degeneracy of the excited state; there's the energy of the excited state. So this is our partition function, and when we take the derivative, the zero energy of the ground state means that term just goes away, and we only have to consider the second term: the 121.1 moves out front, and the minus sign moves out front as well. So dq/dβ is just equal to minus two times that energy times e to the minus beta times that energy, where the energy is 121.1 wave numbers. Now, we want that in joules, so we have to do the usual unit conversions, and this is my clunky way of doing that, which I find always works for me. There's Boltzmann's constant, there's the temperature we're talking about, and if we plug these numbers in, this is what we get for dq/dβ. Here's N, here's q, which we just pulled off that plot, and the minus sign out front cancels with that minus sign right there, so we end up with 519 joules per mole for the total energy of one mole. [A student asks about N.] Yes, N here is one mole's worth of molecules, 6.022 times 10 to the 23rd, because we're calculating the molar internal energy. Now, does this make sense? Always ask that question: does this number make sense? In this particular case we can figure it out in a rather detailed way, because on Friday we worked out the term populations. We said 36% of the molecules are in the excited state at 121.1 wave numbers, and the rest are in the ground state.
Well, the ground state molecules don't contribute to the internal energy, because the ground state has zero energy; only the excited state contributes to the total energy. So if I take this 0.36, multiply by a mole of molecules, multiply by the energy of the excited state, and convert to joules, I should get the same answer I just calculated for the internal energy, and I get darn close; the difference between this and 519 is just rounding. So yes, 519 joules per mole is probably correct, because I can do this back-door calculation of the same quantity: I can calculate the total amount of energy in this one mole of molecules based on what the excited state's energy is and what fraction of the molecules are in it, and if I had carried a few more sig figs, this would be exactly 519 instead of 520. Everybody see what I did? Okay, so if we ask the partition function in the right way, it will tell us the average internal energy; we can calculate that for a mole of molecules or for a single molecule, no problem. Okay, now there's a confusing subject discussed on page 429 of your book. Not that all of this isn't confusing, quite honestly it is, but this is more confusing than most of it. It concerns the classification of molecular ensembles as microcanonical, canonical, or grand canonical: what's held constant in each type of ensemble, and what the partition function is in each case. If you're not confused by this, you're just not paying attention; it is very confusing. So let's talk about it in a little more detail.
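The whole NO calculation, including the back-door check, fits in a few lines. A sketch, assuming the 121.1 cm⁻¹ spacing and double degeneracies quoted in lecture (the variable names are mine):

```python
import math

# NO electronic contribution to the molar internal energy at 300 K.
# States: ground level at 0 and excited level at 121.1 cm^-1, each doubly degenerate.
h, c, kB, NA = 6.62607e-34, 2.99792458e8, 1.380649e-23, 6.02214e23
T = 300.0
beta = 1.0 / (kB * T)

E1 = h * c * 121.1e2                          # 121.1 cm^-1 converted to joules
q = 2.0 + 2.0 * math.exp(-beta * E1)          # partition function (two doubly degenerate levels)
dq_dbeta = -2.0 * E1 * math.exp(-beta * E1)   # derivative of q with respect to beta
U_molar = -(NA / q) * dq_dbeta                # molar internal energy, J/mol

# Back-door check: fraction of molecules excited, times the excitation energy per mole
frac_excited = 2.0 * math.exp(-beta * E1) / q
U_check = frac_excited * NA * E1

print(round(q, 3), U_molar, U_check)
```

This reproduces q ≈ 3.119 at 300 K and roughly 520 J/mol (519 with more significant figures, exactly as in the lecture), and the back-door population calculation gives the identical number because it is algebraically the same quantity.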
What we've been discussing for the last week is this thing here, the microcanonical ensemble, which applies to individual molecules, one member of an ensemble of molecules. We've been talking about one molecule. The partition function asks how many states in each molecule are thermally accessible at a particular temperature. So the microcanonical ensemble is the ensemble of states that could exist for this molecule given that it has a certain number of quanta of energy. In some ways it should be obvious that we've been talking about one molecule, because look: there are two states in this molecule that are always occupied, and four that could possibly be occupied. So with this q we've really just been talking about one molecule; if we'd been talking about two, these numbers would be four, five, seven and eight, not two, three and four; there'd be a larger number of thermally accessible states because we'd be talking about multiple molecules. But that's not what we've been doing; we've been talking about one molecule this whole time. The Boltzmann distribution law has two forms, but notice that in neither of these two partition functions is the number of molecules mentioned. They're independent of N; they don't contain N. So in essence, this microcanonical partition function has just been asking: what's the probability that n equals zero is occupied? That n equals one is occupied? n equals two? Add up those probabilities, boom, you get the microcanonical partition function we've been talking about so far. Okay, so now let's define the partition function for an ensemble of molecules, more than one.
We're going to call it big Q instead of little q so that we never confuse these two things, in spite of their intrinsic ability to confuse us. Notice that little e is the energy of a single molecule, while big E is the energy of all the molecules. These summations run over the quantum states of the whole assembly of molecules. So the question is, how can we express big Q in terms of little q: the partition function for the canonical ensemble in terms of the partition function for the microcanonical ensemble? Let's figure this out, because we want it to make sense even though it's intrinsically confusing. Let's consider just two molecules, A and B, and write the big Q for these two molecules. It's a summation that contains the energy of both molecules, so we can write it in terms of molecule A and molecule B: E sub i is just E A plus E B, all the energy in A plus all the energy in B. Okay, so we've got two molecules in our system now. What does this summation refer to? Well — and I left the parentheses out here; B should be multiplying this whole thing, parentheses here and here — we want to consider all the possible states. Molecule A can be in its ground state, and so can molecule B. Molecule A can be in its ground state while molecule B is in its first or second excited state. Likewise, molecule A could be in its first excited state while molecule B is in its ground state, and so on; we have to consider all of these combinations. That's what these summations are. Now, it turns out that if I look at this, at the end of the day all I'm doing is multiplying two microcanonical partition functions together.
Here's the microcanonical partition function for A and the microcanonical partition function for B. If I multiply them together, little q A times little q B equals big Q AB. So for this two-molecule system, big Q is the product of two partition functions. Here I'm taking q to the second power: if these two molecules are identical, big Q is just little q squared. In general, if I've got N molecules, I'm going to take my microcanonical partition function to the Nth power; there's an N in the exponent. This is the appropriate expression when we've got N distinguishable molecules. What's a distinguishable molecule? Well, if the molecules were located on a lattice, we could keep track of their positions, and then this would apply: fixed positions on a crystalline lattice. This is rarely the case, as it turns out; we're rarely able to label molecules and keep track of individual molecules in the system. So what if they're not distinguishable? That's the more general case: we can't keep track of which molecule is which. What happens? Well, for two distinguishable molecules, we can tell the difference. Check out these two states right here: in one, molecule A is excited and molecule B is not; in the other, molecule B is excited and molecule A is not. These are two different states for the system, and we can tell the difference between them because we can keep track of these two different atoms: we can see this guy's excited and this guy's not, and vice versa. So these are two different states that we can identify. But if these are gas atoms zooming around this room, we can't keep track of them anymore.
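You can verify the factorization big Q = q_A · q_B directly: sum e^(−β(εA + εB)) over every joint state of the pair and compare it to the product of the two single-molecule sums. A sketch with made-up energy levels:

```python
import math
import itertools

beta = 1.0                      # arbitrary inverse temperature for the check
energies = [0.0, 1.0, 2.5]      # arbitrary single-molecule energy levels

# Microcanonical (single-molecule) partition function
q = sum(math.exp(-beta * e) for e in energies)

# Canonical partition function for two distinguishable molecules:
# sum over every combination of (state of A, state of B)
Q = sum(math.exp(-beta * (eA + eB))
        for eA, eB in itertools.product(energies, repeat=2))

print(Q, q * q)
```

The double sum over joint states factors exactly into the product of the single-molecule sums because e^(−β(εA + εB)) = e^(−βεA) · e^(−βεB), which is the whole content of big Q = q².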
If our sample is a gas, the molecules are zooming around and we can't keep track of them. In that case these two systems are indistinguishable from one another; in fact, they're just one state. We can say one molecule's excited and the other one's not, but we can't say which is which. So if the states are indistinguishable, because the molecules are zooming around in the gas phase, for example, we'll have a smaller number of states: here we had two states, but if we can't tell the difference, it's just one. So for our example above, big Q is going to equal one-half of little q to the N; there are half as many states for two indistinguishable molecules as for two distinguishable ones. Let's consider another case: three molecules, A, B, and C, where one has one quantum of energy, one has two, and one has three. These are all the different ways we can arrange that configuration, all the different microstates for that configuration, if you will: six of them. This assumes that A, B, and C are locked into a lattice so we can tell them apart; we can keep track of which molecule contains how many quanta of energy. But if they're zooming around, this is just one state. So if A, B, and C are distinguishable, locked in the lattice, there are six states here; if they are indistinguishable, there's just one. Three molecules: one over three factorial times q to the N. So we multiply by one over N factorial to account for how many states we lose when the molecules become indistinguishable. Okay, so that explains this nonsense.
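The six-versus-one counting is easy to see in code. A sketch (the labeling scheme is mine): for three labeled molecules holding 1, 2, and 3 quanta, every permutation of the assignment is a distinct microstate, but for unlabeled molecules only the multiset of energies matters.

```python
from itertools import permutations

# Three molecules holding 1, 2, and 3 quanta. If we can label the molecules
# (e.g., fixed on a lattice), each assignment is a distinct microstate.
labeled = set(permutations((1, 2, 3)))
print(len(labeled))      # 3! = 6 distinguishable assignments

# If the molecules are indistinguishable (zooming around in a gas), only
# the multiset of energies matters, so the 6 collapse into a single state.
unlabeled = {tuple(sorted(s)) for s in labeled}
print(len(unlabeled))    # just 1 state
```

The ratio is exactly N! = 3! = 6, which is where the 1/N! in big Q = qᴺ/N! comes from.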
For distinguishable particles, big Q is just equal to the microcanonical partition function to the Nth power, and if the particles are indistinguishable, we reduce that by a factor of one over N factorial. Now, the grand canonical ensemble we're just going to forget about; we don't need it, and this is already confusing enough. So we're going to talk about microcanonical ensembles and canonical ensembles. Are you with me? This is cryptic, but we now sort of understand it: we understand where these terms come from, where big Q comes from. It's kind of a central thing in statistical mechanics, but my goodness, more confusing than it needs to be. Okay, you ready for entropy? This is a Kenny's shoe box. How many people have ever been to Kenny's? It's a shoe store; it's the only shoe box I could find in Google Images. We're going to put a hundred nickels into this shoe box, all heads up; takes a while to turn them all over, but they're all sitting in there, heads up. Now I'm going to shake the shoe box and then make sure all the nickels are flat again so I can see whether they're heads or tails. What's going to happen? They're going to stay heads, who said that? Wrong. [Shakes the box.] Let's take inventory. We didn't shake very hard, obviously; only a couple turned over: ninety-one heads, nine tails. Now I'm going to shake them again. This time, seventy-two heads, twenty-eight tails. I shake them again: fifty-nine heads, forty-one tails. It'll sort of fluctuate around fifty-fifty if I keep shaking, won't it? It'd be unusual to get exactly fifty-fifty. But this system never goes back in the other direction. The probability of shaking this system and getting back to all heads, I'll tell you what it is in just a moment.
Does everyone agree that we always see this system evolving in this direction, never the opposite? Well, let's look at the number of microstates. A hundred heads for a hundred coins: there's only one way for that to happen. How many ways are there to have ninety-one heads and nine tails? For a hundred coins, there are about 1.9 trillion ways to do that. What about seventy-two heads and twenty-eight tails? 4.99 times ten to the twenty-four ways. Is that number twice as big as the last one? No, it's about a trillion times bigger. Fifty-nine heads, forty-one tails: two times ten to the twenty-eight, another factor of several thousand. Each one of these microstates is, to first order, equally probable as a function of time, according to statistical mechanics. This system always evolves in the direction of increasing W; all systems do this. For any isolated assembly, we can always predict the direction of spontaneous change: it's the direction in which W increases. The number of microstates always increases, for every kind of reaction we care about, for every change in state, for every change in volume. It's W increasing that determines the direction of spontaneous change. This is a very important idea. Remember, an isolated system remains at constant energy, so the system must be optimizing some other parameter. There's no difference in energy between heads up and tails up: this system has exactly the same energy as this system. Whatever energy difference there might be between a coin that's heads up and a coin that's heads down is very, very small. So this has got nothing to do with energy. The direction of spontaneous change is optimizing W.
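The microstate counts for the shoe box are just binomial coefficients, W = C(100, heads). A quick check of the numbers quoted above:

```python
from math import comb

# Number of microstates W for each head/tail count with 100 coins
for heads in (100, 91, 72, 59, 50):
    # comb(100, heads) counts the ways to choose which coins show heads
    print(heads, "heads:", comb(100, heads))
```

This reproduces the explosion: 1 way for all heads, about 1.9 × 10¹² for 91 heads, about 5 × 10²⁴ for 72 heads, and about 2 × 10²⁸ for 59 heads, so the shaken box drifts inexorably toward the counts with the most microstates.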
That's what's determining the direction of spontaneous change. So Boltzmann postulated that this thing called the entropy is the thing that's being optimized, and he postulated that the entropy is equal to his constant times the log of the number of states. He had good reasons for doing this, although I have only the slightest understanding of them. He called this thing S, the entropy, and said it's equal to k log W. This was, in his own mind, his most important contribution to science, the derivation of this equation. Okay, should we do an example? Let's calculate the standard molar entropy of neon gas at 200 kelvin and at 298.15 kelvin. It's an ensemble of atoms at constant temperature, and because it's a gas, these atoms are zooming around; we have no chance of keeping track of them, so they're indistinguishable particles. So this is what's going on: it's a canonical ensemble of indistinguishable particles, and the partition function is the microcanonical partition function to the N, divided by N factorial. Okay, so how do we solve this problem? Let's start with this expression for the entropy, which was derived in your book on page 185; this is equation 15.2. It says the entropy is equal to the internal energy, measured from the energy of the ground state, divided by temperature, plus Boltzmann's constant times the log of the canonical partition function. This version applies when we're talking about a canonical ensemble of indistinguishable molecules. Notice that N is gone, because N is contained within big Q. Now let's write big Q in terms of little q; we derived that relationship earlier, so we just plug this expression in for that. Boom.
Then we split this into two terms. We pull the N factorial out, move it into the numerator, and put a minus sign in front of it; you still keep the k there. And for q to the N, I can move the N out front, so it's N times k times log q now. So I've got two terms from one, just because the numerator and denominator went into different terms. Now we do two things. We substitute for N: Avogadro's number times the number of moles, where little n is the number of moles. And we use something called Stirling's approximation for the log of N factorial, which allows us to write it as N log N minus N. Then remember that Avogadro's number times Boltzmann's constant is just the gas constant, R. So I'm substituting N sub A times little n for N, and then substituting R for N sub A times k; that gets me this expression right here, with R's now instead of Boltzmann's constant. Okay, so what's q? For an atomic gas, what's the partition function for neon? What do we have to think about at these temperatures, 200 kelvin and 298 kelvin? What states are accessible to neon? Does neon have rotational states? No: even if it rotates, we don't notice that it's rotating; it's an atom. Does it have vibrational states? No, there's nothing to vibrate against; it's just an atom. Does it have any electronic states we can access at these temperatures? There's no way for you to know this, but the answer is no: it has no electronic states low enough in energy to get involved at these low temperatures. So all neon can do is translate around.
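Stirling's approximation, ln N! ≈ N ln N − N, is what makes that N factorial tractable, and you can see how good it gets as N grows. A quick sketch using `lgamma` for the exact ln N! (since N! itself overflows long before N reaches molecular scales):

```python
import math

# Compare exact ln(N!) against Stirling's approximation N ln N - N.
# math.lgamma(N + 1) gives ln(N!) without computing the huge factorial.
for N in (10, 100, 10**4):
    exact = math.lgamma(N + 1)
    stirling = N * math.log(N) - N
    print(N, exact, stirling, (exact - stirling) / exact)
```

The relative error shrinks steadily with N; for N on the order of Avogadro's number it is utterly negligible, which is why the approximation is safe in the entropy derivation.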
Translation is the only kind of state available to it: no rotational, no vibrational, and no electronic states accessible at these low temperatures. There's no way you could have known about the electronic states; you'd have to be told that. But you'd know about the rotational and vibrational states: it's an atom, for goodness' sakes. So for an atomic gas, translation is the only means by which energy can be stored in the system. Well, that's not quite true: if there were accessible electronic states, in principle they could contribute; but if there aren't, then translation is the only way energy can be stored by the system. Under these conditions, the Sackur-Tetrode equation applies: the entropy equals n times R times the log of e to the five-halves times kT over the pressure, divided by the thermal wavelength cubed. Did I skip the derivation of this? It's not obvious that this equation follows from the one before it; apparently I left out a couple of lines of algebra. Trust me, this equation applies to a monatomic gas. So what do we need to know? Obviously we need the thermal wavelength. We need the temperature, which we're given. We need the number of moles, and we need the pressure. The thermal wavelength is given by this equation right here, and when I first looked at it, the first thing I thought was: if this is a wavelength, with units of distance, I'm missing where those units of distance come from. Does this mess have units of distance, meters?
Well, if you're not sure, you should always do a dimensional analysis, because among other things it will not only tell you whether you're going to get the right units out of this mess, it'll tell you what units the other parameters have to have in order to get the right units out of the equation. We need units of distance. So look at this equation: what's h? h has units of joule seconds. What's m? That's the mass of a single neon atom, units of kilograms. Boltzmann's constant has units of joules per kelvin, and temperature is in kelvin. Now, kilograms can be reconfigured by remembering E equals mc squared: mass is E over c squared, which is joules over meters squared per second squared, or joule seconds squared per meter squared. Boom: joule seconds squared per meter squared, that's a kilogram. I always just remember E equals mc squared; you get useful conversion factors that way, especially for mass. And when I cancel all of these units, I get meters. So if I use mass in kilograms, Planck's constant in joule seconds, and so forth, I will get meters out; it may be surprising, but you do. Okay, so now I can put the numbers in. There's h, and what's that? That's the molar mass of neon. I want the mass of one neon atom, so I multiply by one over the number of atoms in a mole; that gives the mass of a neon atom in units of kilograms, because I started with the molar mass of neon in kilograms, not grams. And if I run these numbers in my calculator, I get 2.748 times 10 to the minus 11 meters, or 27.5 picometers. That's a really short distance; this thermal wavelength is tiny.
Even though we're not at a super high temperature here, this is a tiny thermal wavelength, so we don't really have any intuition about what the thermal wavelength should be. It's going to be a tiny number; now, this is maybe the first intuition that we have on that. Okay, so now we can plug everything into the Sackur-Tetrode equation. This is just going to be n equals 1, R equals 8.31451 joules per kelvin per mole, Boltzmann's constant, the temperature, and there's the thermal wavelength right there, cubed. And what is that number? It's the number of pascals in 1 atm. We're talking about standard molar entropy: standard means 1 atm, and that means 101,325 pascals. That's a good number to keep in the back of your mind. A pascal times a meter cubed is a joule, and so if I run these in my calculator I get 138 joules per kelvin per mole. Entropy is always joules per kelvin per mole, unlike the energy, which is joules per mole. And if I increase the temperature to 298.15 kelvin, room temperature, I get 146 joules per kelvin per mole. Okay, we're going to do more examples of the entropy on Wednesday. Now, one confusing thing is that Chapter 13 doesn't say anything about the entropy, and neither does Chapter 14; the entropy is in Chapter 15, so we're jumping around a little bit between Chapters 13, 14, and 15. Is everyone sort of comfortable with that? It'll be obvious what topics we're emphasizing as we do this. Okay, so we'll see you Wednesday.
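Both the thermal wavelength and the two entropies can be reproduced in a few lines. A sketch of the Sackur-Tetrode evaluation for one mole of neon at 1 atm (the function name is mine; constants are standard values):

```python
import math

# Sackur-Tetrode: S = n R ln( e^(5/2) kT / (p * Lambda^3) ),
# with thermal wavelength Lambda = h / sqrt(2 pi m k T).
h, kB, NA, R = 6.62607e-34, 1.380649e-23, 6.02214e23, 8.31446
m = 20.18e-3 / NA            # mass of one neon atom, kg (from 20.18 g/mol)
p = 101325.0                 # standard pressure: 1 atm in pascals

def entropy(T):
    """Thermal wavelength (m) and molar entropy (J/K/mol) of neon at T kelvin."""
    Lam = h / math.sqrt(2 * math.pi * m * kB * T)
    S = R * math.log(math.exp(2.5) * kB * T / (p * Lam**3))
    return Lam, S

for T in (200.0, 298.15):
    Lam, S = entropy(T)
    print(T, "K:", Lam, "m,", round(S, 1), "J/K/mol")
```

At 200 K this gives Λ ≈ 2.748 × 10⁻¹¹ m (27.5 pm) and S ≈ 138 J K⁻¹ mol⁻¹, and at 298.15 K it gives S ≈ 146 J K⁻¹ mol⁻¹, matching the values worked out in lecture.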