Okay, we're going to get started now. All right, so last time at the very end, I told you about a quantity known as the binomial coefficient. The function in Mathematica takes two arguments, which I refer to here as n and m, and the definition of the binomial coefficient is given here in terms of n factorial, m factorial and n minus m factorial. There's a symmetry to the binomial coefficient: the values are the same when m and n minus m are equally spaced around the middle value. So it turns out that n over m is equal to n over n minus m, and we'll see this in a minute when we evaluate the binomial coefficients with Mathematica. Now, does anybody remember what's the interpretation of this number, or what is this number? Okay, I'll remind you. Suppose we have a list of n distinguishable objects and we wish to separate them into two lists, one containing m elements and the other containing n minus m. The binomial coefficient is the number of ways that we can do that if we don't care about the order in which the elements appear in the smaller lists. So let me give you an example of something that you can calculate using the binomial coefficient, so maybe you'll have a better feeling for what it means. I'm going to suppose that I flip a coin four times, okay? And each time I flip the coin there are two outcomes: it's either heads or tails. And what I want to list here is all the ways that I can flip the coin four times and get exactly one tails, okay? So we can make that list. I could have the first flip be a tails and the rest heads, or the second one, or the third one, or the fourth one. So there are four ways. Now, I want you to see that that number is given by the binomial coefficient of four and one.
Because I can think about it this way: an equivalent question would be, how many ways can I break up a list of four elements into two lists, one containing one element and the other containing three, without regard for the order? It's the same question. And we'll evaluate the binomial coefficient of four and one, and also four and three, in a second to see that that's true, okay? So let's go ahead and have a look to see if that is indeed the case. And we'll talk about coin flips a little more in a second, and we'll need to make sure that we feel comfortable with that statement. So I'm going to say binomial coefficient and then I'm going to put four — that's the number of coin flips — and then I want one of my lists to contain one element, okay? So I get, whoops — oh, I'm sorry, the function isn't called binomial coefficient, it's just Binomial, okay? So you see I get four. And as I said, you'll get the same thing if you put in four minus one, which is three, okay? So there's that symmetry. You can actually construct an object known as Pascal's Triangle from binomial coefficients. You might want to look that up; it's kind of an interesting mathematical result. Okay, so now let's try it again. Suppose I take 10 coins, 10 flips of a coin, and I want to know how many ways there are for me to get exactly four heads. So what quantity should I evaluate? Binomial, 10, 4. Anybody have a guess as to how many that might be? Do you think it's a lot? Well, let's have a look. So Binomial, whoops, 10, 4: 210 ways to get exactly four heads if I flip the coin 10 times. And just to check, it should be the same as Binomial, 10, 6, okay? All right, now I want to ask an even more specific question, again about the coin flips. So let's go back over here and let me correct this — sorry, that's not supposed to be there, okay? So the next question I want to ask is, what's the probability of such an outcome, okay?
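Collected into one input cell, the evaluations from this passage look like this (the comments are added here, not part of what was typed in lecture):

```mathematica
Binomial[4, 1]    (* ways to get exactly one tails in four flips: 4 *)
Binomial[4, 3]    (* symmetry: Binomial[n, m] == Binomial[n, n - m], so also 4 *)
Binomial[10, 4]   (* ways to get exactly four heads in ten flips: 210 *)
Binomial[10, 6]   (* the same by symmetry: 210 *)
```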
So what we looked at here was how many ways can I flip the coin four times and get exactly one tails. Now I want to know, what's the probability, out of all possible flips of a certain number of coins, of getting a certain outcome, all right? To do that, I want to start with an even simpler example. Now I'm going to flip two coins and I'm going to look at all the possibilities, all right? One possibility is both of them are heads. Another possibility is the first is heads and the second is tails. Or I could get tails on the first flip and heads on the second, and then the last possibility is that I get two tails. So there are four possible outcomes. What's the probability of getting two heads? There's only one way to do it, four possible outcomes, so the probability is one fourth. Now what if I ask, what's the probability of getting one head and one tail, and I don't care about the order? Then I should combine these two and I get one half, or two fourths. And finally, for the two tails, it's also a fourth. And I want you to notice a couple of things. First of all, if we did this right, these should add up to one — and they do, okay? And the second thing I want you to notice is that this one fourth is actually equal to one half times one half. Now why do I want to point that out? Well, because if I flip one coin, the probability of getting a heads is a half, okay? And if I flip two coins and I ask the question in the following way — what's the probability of getting heads on the first flip and heads on the second flip — the rule for constructing that probability is to multiply the probabilities for the independent events, okay? So the one fourth I could have equivalently gotten by saying, oh, I want a heads and a heads, so that should be a half times a half, or a half squared, okay? Now the next thing I want to do is construct a more general result, okay?
So I want to ask, what's the probability of getting m heads in n flips, okay? So m is going to be a precise number, less than or equal to n, okay? Now let's think about what we have to know in order to get that. Well, first of all, the probability of getting m heads is one half to the power m, but at the same time we have to get n minus m tails, and since the probability of tails is also a half, we have a half to the n minus m, okay? So this is the probability of getting m heads and n minus m tails, got it? Okay, now there's one other thing we have to do, and that is we have to multiply by the number of ways of dividing up the flips into two lists, one containing exactly m heads. And what's that number? That's the binomial coefficient n over m. And you can see that here the m's will cancel in the exponents, so this turns out to be n over m times one half to the power n, got it? All right, so let's evaluate a couple of those just for fun to see what they are, and then what we'll do is we'll forget about the coins and do the more general case that I'll explain to you in just a minute, all right? So I'll go ahead and just — can you see that with the light on? All right, so let's evaluate 10 coins and 4 heads, okay? What do we need to do? We need to do Binomial, 10 comma 4, times one half to the power — what? 10. And we can do N bracket percent to get a decimal, all right? So if we flip a coin 10 times, or flip 10 coins, the chance that we'll get 4 heads is about 20%, a little over 20%. Let's try another one: the probability of getting 500 heads when we flip 1,000 coins, so Binomial, 1,000 comma 500, times one half to the power 1,000. What happened here? That doesn't look right to me. Oh, right, because there's a fraction — that's right, thank you. All right, N percent: about 2.5%, okay? So there are a couple of examples of using the binomial coefficient in the calculation of probabilities of coin flips.
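The two evaluations described here, assembled as they would be typed (`N[%]` converts the exact fraction from the previous output into a decimal):

```mathematica
Binomial[10, 4] (1/2)^10        (* exact: 105/512 *)
N[%]                            (* about 0.205, a little over 20% *)
Binomial[1000, 500] (1/2)^1000  (* an exact, unwieldy fraction *)
N[%]                            (* about 0.025, i.e. 2.5% *)
```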
But now let's talk about a much, much more general situation. And this general situation results in a probability distribution that's very special, which is called the binomial distribution. And I'll tell you why it's special in a minute. But let me explain to you how we construct this one. Okay, so what we just did here was we considered a special case, heads and tails, okay? And we can think of this special case as an instance of a more general setup, where we have something that we're going to call a success — which here we said was heads — and then not-success, or failure, which here is tails, T, all right? Now, let's denote the probability of success as P. In this case, it's 0.5, all right? And the probability of failure is 1 minus P, which here also equals a half, or 0.5, okay? Now what I want to do is consider an arbitrary situation. If you like, you can think of it as a loaded coin: a fair coin has equal probability of getting heads and tails, so half and half, whereas for a loaded coin we adjust it and just call the probability P. And if it's loaded in our favor, it would be P greater than 0.5, okay? But let's just leave it as P, and consider the case of an arbitrary value of P. Now what we're going to construct is something I'm going to call P of M — capital P. And what this is going to be is the probability of M successes, and we're going to assume that we have N trials, okay? Now we can look to here for inspiration. So what do we need in order to construct that probability? We need the probability of getting M successes. So that's going to be P times P times P, up to M factors, so P to the M, all right? And then we need to multiply that by the probability of getting N minus M failures. So that's 1 minus P to the power N minus M. And then we need the binomial coefficient to enumerate the number of ways that we can split our list of trials into two groups, one containing M and one containing N minus M, okay?
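Written out, the distribution just constructed is

```latex
P(M) \;=\; \binom{N}{M}\, p^{M}\, (1-p)^{N-M},
\qquad
\binom{N}{M} \;=\; \frac{N!}{M!\,(N-M)!}
```

and for $p = 1/2$ it reduces to the coin-flip result from before, $\binom{N}{M}\,(1/2)^{N}$.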
And this is what's called the binomial distribution. And it's special because it's so general, okay? A couple of possible outcomes, arbitrary probability. And as we'll see a little bit later, this distribution is the starting point for deriving very well-known and ubiquitous distributions that occur in science and nature in certain more restricted special cases — the normal distribution and the Poisson distribution are special cases of the binomial distribution. Okay? So let's go ahead and see how we get this thing in Mathematica, and then we'll talk about an important limiting case. Any questions on this? Anybody heard of the binomial distribution? A couple of people who took statistics. Okay. All right. So now, how do you refer to this thing in Mathematica? In Mathematica, that function there can be specified as follows. It's called BinomialDistribution, and it has two arguments: n, the number of trials, and P, the elementary probability. Now, if we actually want to generate the distribution, what we do is put this inside another function called PDF, which stands for probability density function, okay? And then we need an additional argument here to tell us how many successes we want. So if we want to keep it general, we say M, all right? And if you enter that, you get basically the formula that I wrote down over there, okay? 1 minus P to the power N minus M, times P to the M, times the binomial coefficient N over M, okay? And Mathematica knows what that means. If you refer to it like this, then you have exactly that distribution. Okay? And right now it's not defined with any particular numbers. So let's have a look at what it looks like for a particular case of P and N. And the way I'm going to do this is I'm going to actually look at it for all values of M, so that we get the full distribution, okay? So I'm going to define N equal 10 and P equals 0.4, okay?
And now I'm going to make a list — a table, actually — that I can plot, and I'll call it lowercase binomialdist. Then I'm going to tabulate this guy as a function of M, all right? So what I want to do is Table, curly bracket, M, and then we can mouse this in, okay, and put a curly, and then we just have to specify the range of M. M can range from 0 successes to 10 successes, so we say M goes from 0 to N, okay? And I'll just put a semicolon there, and then we'll plot the thing. It's evaluated at discrete values of M — we're making a table of discrete numbers here — so we should use ListPlot on binomialdist, all right? So let's enter that and see what we get. And I'm going to go ahead and make the point size a little bigger, PlotStyle arrow PointSize 0.02, so we can see them better, okay? All right, so what is the interpretation of this? For an outcome for which success has a 40% chance of happening on an individual trial, what we've plotted is the probability of getting a given number of successes, as a function of the number of successes. Okay? So the probability of zero successes is very small, and of 10 successes very, very small. And notice that the maximum occurs here at 4. This is not a coincidence: 4 is what you get if you multiply the number of trials times the elementary probability of success. So the maximum in the binomial distribution — we'll see other examples, but this shows it — occurs at that number, the number of trials times the elementary probability of success. And now that we've got this all typed in, we can see things like that very easily. Let's just change the elementary probability to 0.6, and then you see the distribution shifts. Now, another thing that I would like you to notice is that this thing looks almost like a bell curve. You see, it's got the peak here at 6, which is 0.6 times 10, and then it's got some wings on it.
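Assembled, the commands described in this passage are roughly (variable names as spoken in lecture):

```mathematica
n = 10; p = 0.4;
binomialdist =
  Table[{m, PDF[BinomialDistribution[n, p], m]}, {m, 0, n}];
ListPlot[binomialdist, PlotStyle -> PointSize[0.02]]
```

The peak sits at m = n p = 4; changing p to 0.6 shifts it to 6. For larger n, adding the option PlotRange -> All keeps the plot from being cut off.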
And for this value, these particular values of parameters, it's a little bit skewed on the lower side, whereas in the previous one it was skewed a little bit to the higher side, but it's almost a bell curve. And that's significant because there will be a limit that we'll talk about in just a minute in which this becomes the bell curve, or in other words, the Gaussian or normal distribution. And one way to appreciate that limit is to increase the number of trials. So let's go ahead and do that. So now, let's change this to 100. And I'm going to put an option in here that's useful for you to know. Sometimes your graphs get cut off. So if you put in an option called plot range arrow all, it will plot the whole thing. All right? So as expected, the maximum occurs at 60, which is 100 times 0.6. And now you can see that for n equal 100, it looks a lot more like a symmetric bell curve. So we can already appreciate the possibility that as the number of trials goes up, the binomial distribution becomes a bell curve. And that's convenient because as we'll see to some extent in this course, but you'll appreciate in other classes and especially physical chemistry and if you take probability, the bell curve, the Gaussian function, shows up a lot. And it has very nice mathematical properties that allow you to do exact calculations, whereas the binomial distribution is not so convenient. Okay. Let's go ahead and have a look at another case. So now let's go back to our coins example. A thousand coins. If it's a coin, a fair coin, we should put in 0.5. Notice the number that we get at 500 is the number we calculated before, about 0.026 or something. And notice that as we increase n, the distribution becomes more and more narrow. You see that? So there's an interesting connection to the chemistry issue here too. Let's make the number even bigger, 10,000. Let's go up to 100,000. 
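The narrowing can be checked numerically with the same PDF/BinomialDistribution construction (the specific probe values here are illustrative, not from the lecture):

```mathematica
n = 100000; p = 0.5;
N[PDF[BinomialDistribution[n, p], 50000]]  (* the peak value, roughly 0.0025 *)
N[PDF[BinomialDistribution[n, p], 40000]]  (* far out in the tail: effectively zero *)
```

Here the standard deviation is Sqrt[n p (1 - p)], about 158, so 40,000 lies more than 60 standard deviations below the mean.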
Notice as the number goes up and up, the probability of getting anything but the value at the maximum, n times p, decreases very rapidly away from that value. Now what the heck could this have to do with chemistry? Well, suppose I have a cylinder of gas and I have a wall right in the middle of it, and then I poke a hole in the wall and allow the gas to distribute. What's going to be the outcome of that? What's the situation at equilibrium? Well, this is like a coin flip in a way, right? If I have a really large number of gas molecules, like Avogadro's number, and I look at this plot here, I can appreciate that the most probable outcome, by a lot, is going to be the case where I have half heads and half tails — in other words, half the molecules on one side of the wall and half on the other. And that's one example that you probably heard about when you were first learning about entropy and its relation to microstates and most probable outcomes. All right, so if we think of our coins as representing the gas being on one side of the container or the other, then the problem is the same. And what I'm showing you here is that if you had 100,000 gas molecules, the probability of having 40,000 on one side and 60,000 on the other is, for all intents and purposes, zero. By a long shot, the most probable outcome is having half of them on each side, okay? All right, got it? Okay, now let me go back to the board. We're not going to derive this — it's not too hard to derive, but I don't want to do it. I'll just quote for you a result that's very significant, and we've already sort of seen an indication that it's true, all right?
So, when the number of trials n is very large, so we'll say in the limit that n goes to infinity, but in practical terms it's much less than that, and if p times n, the position of the maximum remains finite and large, then in that limit, the binomial distribution goes to the normal distribution, which I will write for you here in some different symbols. So, it's 1 over the square root of 2 pi times sigma. I'll tell you what sigma is in a minute. And then e to the minus m minus n times, well, let me write it like this, yeah, n times p squared divided by 2 sigma squared, okay? And this is what's known as the normal or the Gaussian distribution, and it has a mean n times p, which is also equal to its maximum value. And the standard deviation is what sigma represents. Sorry about the squeaks. That's square root of np times 1 minus p, okay? So, this is the standard deviation of this distribution, and its mean occurs at n times p. All right, so now let's look at this. This is what's called the discrete Gaussian or normal distribution because m is an integer. We'll talk about the continuous one in a minute. All right, so what I want to do first is compare the binomial distribution to the normal distribution, and we'll start with a small value of n. We'll see that they're different to a certain extent, and then we'll increase n and see that they become indistinguishable. Okay, so let's go back to our original case here, which is 0.4 and 10, okay? And now what I want to do is I'm going to define the Gaussian distribution, okay? So, I'm going to put it right in here, and I'll call it g. So, g of m, a function, colon equals, and so then we have 1 over square root 2 times pi, and then times sigma, which I'll define in a minute, and I should put parentheses around the denominator to keep all these things together, and then times capital E, carat minus m minus n times p squared divided by parentheses 2 times sigma squared. 
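For reference, the limiting result from the board, written out cleanly:

```latex
P(m) \;\xrightarrow{\;n \to \infty\;}\;
\frac{1}{\sqrt{2\pi}\,\sigma}\,
e^{-(m - np)^{2} / 2\sigma^{2}},
\qquad
\sigma \;=\; \sqrt{np\,(1-p)}
```

with the mean (which is also the maximum) at $m = np$.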
And another parenthesis — and I'm missing a parenthesis up here, okay? So that defines my normal distribution as written down over there, and now I'm going to introduce sigma using the same formula as on the board, all right? So that's going to be square root of n times p times 1 minus p. And now I'm going to tabulate the Gaussian distribution, so I'll call it gaussdist equals Table, curly, m, and then g of m, and I'm going to do that for m going from 0 to n, okay? And now what we'll do is we'll plot the binomial distribution and the Gaussian distribution on the same plot, so I'll put a curly here, add gaussdist, and put a comma. All right, so let's see what we've got. Which one is which? Well, we could put a legend if we wanted, but we know the blue one is the first one, okay? So the binomial distribution is the blue points, and the Gaussian or normal distribution is the purple points, okay? Notice that they're a bit different, but already for n equals 10, which is way far from infinity, they do look somewhat similar. What about if we crank up the number of trials a little bit? Now they're starting to look quite similar. So already n equals 100 is getting pretty close to the limiting case for this particular value of p. What about if we crank it up to 1,000? And let's go ahead and make the point size back to normal so we can see any deviations more easily. Now you see they're essentially on top of each other. Okay, so the main point here is, well, first of all, this gives you a little practice with putting in the discrete Gaussian distribution; but second of all, I think we have convinced ourselves that the normal distribution — which, as we will see in the course, and you'll see later, happens to be a much more convenient function — is a great approximation to the binomial distribution for modest values of n. All right, now the next thing is, I want to tell you about the continuous Gaussian distribution.
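The comparison built up over the last two passages, assembled in one piece (names as in the lecture):

```mathematica
n = 10; p = 0.4;
sigma = Sqrt[n p (1 - p)];
g[m_] := 1/(Sqrt[2 Pi] sigma) E^(-(m - n p)^2/(2 sigma^2));
binomialdist =
  Table[{m, PDF[BinomialDistribution[n, p], m]}, {m, 0, n}];
gaussdist = Table[{m, g[m]}, {m, 0, n}];
ListPlot[{binomialdist, gaussdist}, PlotStyle -> PointSize[0.02]]
```

Rerunning with n = 100 or n = 1000 (and PlotRange -> All) shows the two sets of points converging onto each other.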
So now we're going to suppose that instead of having m as an integer, we're going to have the argument of the Gaussian is going to be a real number that's continuous. Okay? And we're going to call the variable x and the continuous Gaussian looks like this. It's very similar. And here I've just introduced the mean as angular brackets x. Okay? All right, so now we're going to see this function is actually predefined in Mathematica. So we'll see how to plot that. All right, so I'm going to go down here. And let's go ahead and redo this one again with 100. Okay, so the mean for this case n times p is 40. All right? Now I'm going to come down here and define something called mean equal 40. And then I'm going to define a plot of the continuous Gaussian function. And the way we access this is very similar to the way we access the binomial distribution. So we're going to say plot, bracket, pdf, normal distribution is what it's called in Mathematica. And then for the normal distribution we have to give mean and sigma. So in other words, the average and the standard deviation. And then I'm going to say plot it as a function of x or tabulated as a function of x. And now I have to say the range of x. And so I'll say x goes from 20 to 60. And I'll put a semicolon. And up here I'm going to define this plot as discrete equals. And I'm only going to plot the Gaussian. All right? So I do that, I enter this and I get my Gaussian. Oh, let me say plot range to 20 and 60. Oops, sorry. 20 and 60 and then I'll specify y goes from 0 to 0.1. Okay. Okay, so now we have a plot of our discrete Gaussian. And I'm going to make a plot of the continuous and then we'll show them on the same plot. And since I want to show them on the same plot, I'm going to specify the plot range here being the same. Okay, so I can just mouse all this stuff in and put it here. And finally say show discrete and continuous, enter. 
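The overlay described here, gathered into one cell (mean and sigma for n = 100, p = 0.4, and the plot-range values quoted in the lecture):

```mathematica
n = 100; p = 0.4;
mean = n p; sigma = Sqrt[n p (1 - p)];
discrete =
  ListPlot[Table[{m, PDF[NormalDistribution[mean, sigma], m]}, {m, 0, n}],
   PlotRange -> {{20, 60}, {0, 0.1}}];
continuous =
  Plot[PDF[NormalDistribution[mean, sigma], x], {x, 20, 60},
   PlotRange -> {{20, 60}, {0, 0.1}}];
Show[discrete, continuous]
```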
Okay, so now we see the blue points are the discrete Gaussian and the blue curve is the continuous one. And you can see that they are coincident; it's just that the continuous one, obviously, is an unbroken curve. And one of the points here is that if you want the continuous Gaussian distribution, what you do is you say PDF, NormalDistribution, you specify the mean and the standard deviation, and then the variable that you want it to be a function of. Okay. All right, so to finish up all of this stuff — not quite finish up — one more little thing about the probability distributions. There are plenty of others, but I think we'll stop at the Gaussian; there are others that you can define based on other limits of the binomial distribution. All right, so I just want to go back to using the symbolic forms. So I'm going to say Clear n and p, and then I'm going to redo something we did earlier. So this will be BinomialDistribution, n p, and then comma m — and you may recall that that's going to just give us — whoops, what did I do? I spelled it wrong. That gives us the formula for the binomial distribution. But now I can actually put this into things: I can ask, what's the mean of that? So I say Mean of all this stuff. Oh, sorry, for the mean we don't put the PDF around it. Okay, so the mean of the binomial distribution is, as we heard earlier, n times p. And if we want the standard deviation, let's put this in there — we get the formula that I wrote down over there for sigma. We can do the same thing for the Gaussian. So, well, let's just start over. Say PDF, NormalDistribution, bracket, mean and sigma, comma x — or I guess we could call it m to be consistent with our previous notation, or no, I guess it was x for the continuous Gaussian. And I'm going to clear mean and sigma before this, because we have some numbers in there.
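The symbolic queries in this passage, in order:

```mathematica
Clear[n, p, mean, sigma];
PDF[BinomialDistribution[n, p], m]             (* (1 - p)^(n - m) p^m Binomial[n, m] *)
Mean[BinomialDistribution[n, p]]               (* n p *)
StandardDeviation[BinomialDistribution[n, p]]  (* Sqrt[n p (1 - p)] *)
PDF[NormalDistribution[mean, sigma], x]        (* the continuous Gaussian formula *)
Mean[NormalDistribution[mean, sigma]]          (* mean *)
StandardDeviation[NormalDistribution[mean, sigma]]  (* sigma *)
```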
Okay, so if we enter that, you see we get the formula that I wrote down for the continuous Gaussian, e to the minus x minus the average squared divided by 2 sigma squared with the factor out in front, 1 over square root of 2 pi sigma. And if we want the mean, what do you think we should get? Mean and standard deviation, sigma. Okay? So these are ways that you can use the symbolic form or if you put numbers in, you'll get specific values corresponding to whatever numbers that you put in. All right, now I want to show you an application of the Gaussian distribution. And this is something that you'll learn in Chem 131c. You'll talk about diffusion and transport, okay? So what I want to think about here is, let me go to the board again, so what I'm going to suppose is that I have a container, a one-dimensional container, okay? And it contains some solvent. And I'm going to call it coordinate x. And so here it will be zero. And what I'm going to suppose is that I put a one-dimensional sugar cube in there at time zero, okay? So I can imagine that all the sugar is just along x equals zero here, an infinitely narrow sugar cube. And then as the sugar cube sits there, right? The sugar, as you know, if you've ever dropped a sugar cube in your coffee, even if you don't stir it, after a while the sugar diffuses out of the cube, dissolves and diffuses, and at some point, what happens? What's the equilibrium situation? So if at time equals zero, I have all my sugar in an infinitely narrow cube at the origin. What's going to be the situation at time goes to infinity? The concentration of the sugar is going to be the same everywhere in the container, right? So that might be some value here that I'll show with a dotted line, all right? So at time equals infinity, the sugar has diffused and it fills the whole container with the constant concentration. What do you suppose the situation is in between? 
Well, it turns out — and this is a problem you'll learn how to solve in P-Chem class — that in between, it's going to be a Gaussian distribution. So at some time t1 greater than zero, the sugar will have started to diffuse away, but it doesn't diffuse away like a square; it obeys a Gaussian distribution if it's undergoing what's called Brownian motion, or simple diffusion, all right? And at some later time, it'll also be a Gaussian distribution with the same area — because the area under the curve is the total amount of sugar that we have — and this Gaussian distribution will broaden until it's flat. So that's one place where you find a Gaussian distribution: in these diffusion problems. And it turns out that there's another problem that has basically the same solution, and that's the one we'll actually plot now. Consider the position of a particle that's diffusing in a liquid, where it starts out at the origin at time zero. What does the probability of finding that particle at a given position and a given time look like? So the quantity that I want to consider is a particle diffusing in one dimension. I can ask, what's the probability of finding the particle at a position x at time t? That's given by a Gaussian distribution that has a time-dependent width. If we start the particle out at zero, then the exponent is minus x squared over 2 sigma of t squared. And it turns out that sigma of t squared can be written, for this Brownian motion, as 2 times what's called the diffusion constant, times t. So the variance, sigma squared, increases linearly with time, which means the width of the Gaussian, sigma of t, grows as the square root of time. Okay, so now what I want to show you how to do is we're going to plot this, but we're going to plot it in such a way that we can animate it.
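The diffusion result quoted here, for a particle starting at the origin:

```latex
P(x,t) \;=\; \frac{1}{\sqrt{2\pi}\,\sigma(t)}\,
e^{-x^{2}/2\sigma(t)^{2}},
\qquad
\sigma(t)^{2} \;=\; 2 D t
```

so the variance grows linearly with time, and the width $\sigma(t)$ as $\sqrt{t}$.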
So basically, we'll be plotting it at different time points on the same plot so that you can actually watch the thing spread out. Okay, so this will be your first introduction to what's called the animate command in Mathematica, which is kind of a fun thing to be able to do. But if you don't care about diffusion or whatever, the point is we're going to plot a Gaussian whose width varies linearly with the parameter, we'll call it time, and we'll animate it. Okay, so let's do that. Okay, so I'm going to clear sigma, I don't know if we've used d or t, but might as well clear them, and p. Okay, and now I'm going to, just to make the equation simple, I'm going to say that the diffusion constant is equal to 1, right, and now I'm going to define sigma equals square root of 2 times d times t, I guess I could leave the d out but whatever, we can change d if we want later. And then this should be, I'm sorry, this should be a function, sigma of t underscore, so it should be colon equals, all right. And now let's define p of x underscore and t underscore colon equals 1 divided by square root 2 times pi times sigma of t and parentheses here, and parentheses here, parentheses here, parentheses here, and then times e carat minus x squared divided by parentheses 2 times sigma of t and put some parentheses around it squared. Okay, now what we're going to do is we're going to make an animation of this thing showing how it varies from some early time to a longer time. All right, so I'm going to type a bunch of stuff in and then I'll explain to you the things that you may not already know. Okay, so the command is called animate, okay, and what we're going to animate is plot, a plot of p of x and t where we're going to plot that from x going from minus 10 to 10, okay, so that's our plot. Oh, wait, we want to put some options in here. 
All right, the options I'm going to put in are this plot range is going to be y going from 0 to 0.3, and these are things that I optimized by trial and error, I can tell you that. And then I'm going to say plot style, arrow, thick, black, this is the line that's going to be plotted, and then an option filling goes to axis. What that's going to do is just shade in the area under the curve and then I'll put some axes, labels, all right, so arrow, the first one's going to be x and the second one will be p of x and t. All right, so that's our plot. So, so far all we have is a plot command with a few options inside animate. Now what we have to do is put in a range for a variable that's going to define our animation. All right, so for us that's time, I'm going to say time t goes from 0.001, so some early time, to 100 in increments of 0.5. And then so it doesn't loop, I'll just say animation repetitions, arrow repetitions goes to 1. Oh, I don't need the curly bracket. Okay, now let's see if that works. Now I have this stuff here, back here. All right, so notice at the beginning that t equals 0.001, let's think about this. So when t is really small, that means the standard deviation of the Gaussian is really small, which means its width is really small. And we can see it's very narrow there. As time increases, the width increases, and so the Gaussian becomes broader and broader and broader, just like in the sugar cube example I explained to you. All right, and so if you push play, you're watching that happen from t goes from 0.001 to 100. So here's somewhere in between, and you can actually, you know, look at how it goes by moving the bar here. So you have this Gaussian spreading, spreading, spreading, and then at very long times, it's almost flat. 
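Putting the whole animation cell together as described over the last two passages (the starting time 0.001 keeps sigma nonzero; option values are the trial-and-error ones quoted in lecture):

```mathematica
Clear[sigma, d, t, p];
d = 1;  (* diffusion constant set to 1 for simplicity *)
sigma[t_] := Sqrt[2 d t];
p[x_, t_] := 1/(Sqrt[2 Pi] sigma[t]) E^(-x^2/(2 sigma[t]^2));
Animate[
 Plot[p[x, t], {x, -10, 10},
  PlotRange -> {0, 0.3},
  PlotStyle -> {Thick, Black},
  Filling -> Axis,
  AxesLabel -> {"x", "P(x,t)"}],
 {t, 0.001, 100, 0.5},
 AnimationRepetitions -> 1]
```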
And if you went to really long times, it would be flat, because at very, very long times the probability of finding the particle anywhere in the volume of the system, or the length of the system in this one-dimensional example, is equal everywhere, unless there are some special forces acting on it. Okay, but for us, this is mainly an opportunity to have a little widget that we can play around with to understand how the Gaussian works. Short time, narrow width; long time, broad width. You see how that evolves. And also, you got to see your first example of using the Animate command. And like I say, this is something you will learn about in Chem 131c, diffusion. Okay, so now we are done with what I want to tell you about statistics. All right, so you've learned how to analyze data in terms of the mean, standard deviation, and variance. You've learned how to make histograms. And then I introduced you to a couple of distributions that you commonly see when you histogram data. And in particular, the Gaussian comes up a lot. All right, are there any questions on what we've done with respect to the statistics? Okay, so I'm going to switch gears now. And the next thing that we're going to talk about is how to plot functions of two variables. Okay, so we learned how to make plots a couple of different ways for functions of one variable. Those are two-dimensional plots, y versus x. All right, so we've seen the Plot command, which applies to functions of continuous variables, and then ListPlot, which allows us to plot sets of individual data points. Now what we want to do is plot a function that has two arguments. And I'll just write down an example for you. And this is an example that some of you have probably already seen in Chem 131a. All right, okay, go back to the board.
Okay, so there's a simple model problem in quantum mechanics that's usually one of the first ones that you solve by hand when you learn how to solve the Schrödinger equation in Chem 131a. It's called the one-dimensional particle in a box. Does that sound familiar to any of you? And what is the model corresponding to the one-dimensional particle in a box? It's a quantum particle that can move in one dimension, which we'll call x. And at x equals 0, we have a wall. And at x equals L, we have another wall. And the particle is confined between those two walls. And if the walls are infinitely high, corresponding to infinite potential energy, then this is a very easy problem to solve in quantum mechanics. And does anybody remember what you get out for the energies? The energies are discrete. Does that look familiar? h is Planck's constant and n is a quantum number. What does n range from? Okay. All right, so that tells you the ground state, the lowest energy, is n equal 1, and then the energy increases with n. Okay. Now, when you solve the Schrödinger equation, in addition to getting these quantized energies, you get something else, which is called the wave function, which tells you the amplitude of the wave corresponding to the particle in this box when it's in a particular quantum state, n. And does anybody remember what that looks like? Well, it's a sine wave, and then it has a normalization factor out in front. Is that right? I think that's right. Okay. So you get one little hump for the first state, then a full cycle with a node in the middle for the second state, and then higher numbers of oscillations as you go higher and higher in energy. Oops, and this should be a function of x. Okay. Now, have you guys done the two-dimensional version yet? Okay. Good. So this is a function of one variable, and if we wanted, we could type this thing in and plot it with Mathematica, but that's kind of boring. We've already done stuff like that.
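For reference, the standard textbook results being recalled on the board, for a box of length L and a particle of mass m, are:

```latex
E_n = \frac{n^2 h^2}{8 m L^2}, \qquad n = 1, 2, 3, \ldots
```

```latex
\psi_n(x) = \sqrt{\frac{2}{L}}\,\sin\!\left(\frac{n \pi x}{L}\right), \qquad 0 \le x \le L
```

The √(2/L) out front is the normalization factor mentioned in lecture, and the sine gives one hump for n = 1, one node in the middle for n = 2, and more oscillations as n grows.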
But what we haven't done yet is to plot a function of two variables. All right. So if you treat the same problem, except now instead of having particle motion in one dimension you have motion in two dimensions, say a square box, then that's also an easy problem to solve in quantum mechanics, and what you get is that the wave function looks like the product of two of these, one for motion in each direction, where I'll call the second direction y. Okay. Now, it's not so easy to tell what that looks like, is it? Well, now I'm going to teach you how to use Mathematica to very easily look at this for whatever values of n you want. By the way, I forgot to indicate that there's a quantum number for motion in each direction, so these don't have to be the same. Okay. So let's just go ahead and do that, and then I'll show you very quickly how the 3D plot works, and next time we'll discuss the details, but I wanted to show this to you because it's a nice fun way to end today. All right. So let's go ahead and define that wave function for the two-dimensional particle in a box: psi x underscore y underscore colon equals. I'm going to suppose that L equals 1, so I'll just put in 2 times sine, and then I'll say nx for the quantum number in the x direction. Oh, yeah, right, thank you. That would not have worked. nx times pi times x, and then L is 1 so I can leave it out, and then times sine ny times pi times y, okay? And now we'll look at the ground state first, so I'll say nx equals 1, sorry, and ny equals 2, all right? Now the command for making a 3D plot is called Plot3D. So we say Plot3D of psi of x and y, and then we just have to say the ranges that we want, so since I set L equal to 1, I'll say x goes from 0 to 1 and y goes from 0 to 1, and now I get a plot. Actually the ground state is 1, 1, so let's look at that one first, okay?
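The dictated definition and plot command correspond to something like the following (with L = 1, so the 2/L normalization in front of the product of sines is just 2):

```mathematica
(* two-dimensional particle-in-a-box wave function, box length L = 1 *)
psi[x_, y_] := 2 Sin[nx Pi x] Sin[ny Pi y]

(* ground state: one hump in each direction *)
nx = 1; ny = 1;

(* surface plot over the square box *)
Plot3D[psi[x, y], {x, 0, 1}, {y, 0, 1}]
```

Changing nx and ny, for example to 1 and 3 or to 3 and 3, and re-evaluating the Plot3D command reproduces the excited-state surfaces shown in lecture.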
Now, right now we don't know which axis is which, so we'll look at how to make axes labels and things next time, but I just want to show you this because it's kind of nice: when you're studying quantum mechanics it's fun to see how these wave functions depend on the quantum numbers. So suppose nx is 1 and ny is 3; then you get interesting things going on, and you can rotate the plot around however you want. You could say 3 and 3, and this helps you understand the nature of these wave functions, all right? So there's your first 3D plot, and there are all kinds of fun things you can do with 3D plots to make them look really cool, and that's what we'll talk about next time. We'll also talk about another way of representing three-dimensional functions, which is called a contour plot, okay? So we'll see you on Monday.