Let's go ahead and get started. So first of all, sorry I'm late. I had to deal with an unanticipated crisis. A couple of announcements. First of all, you should have received a notification that the final course evaluations are open. And I know I just asked you to do midterm evaluations about, I don't know, three weeks ago or so. And quite a few of you responded and gave me very good feedback. I'd like to ask all of you to take a few minutes to fill these out, because these are the ones that actually get officially reported to my department and the dean and the like. And so these are the ones that are taken seriously. So please do take a few minutes to share your thoughts about the course and my instruction. The other announcement is that we're going to have a final exam next week, next Thursday, 1:30 to 3:30 p.m., here. And the style of the exam is going to be very similar to what you had on your midterm. Okay, so it's going to be solving problems in Mathematica, ones that are similar to homework problems that you've done or examples that we've done in class. The final will be cumulative, meaning it will cover all of the material that we've seen this quarter. And I will design the exam so that it will take approximately two hours, and you'll turn it in to a dropbox as before. So just to remind you, in my opinion the most effective way to study for the exam, and I don't think it will require a huge amount of studying time, is primarily to just go back through all the notes from the course and remind yourself what it is that we've done, because the best way to approach using a tool like Mathematica is always to start with an example that does something similar to what you want to do. And there are many, many examples in the notes.
And then the other thing that you might benefit from doing is to look over your prior homework problems and your midterm exam, and if you find something that you didn't understand very well or know how to do, you might want to see if you can figure out what went wrong and how you might do it correctly next time. Okay, so that's the final. Does anyone have any questions about the final exam? Okay, well then let's go ahead and continue with the material. So yesterday we finished up quickly with an example of doing nonlinear curve fitting using the FindFit command. Okay, and just to remind you, when we do curve fitting we have a choice of two different strategies. The first is linear, and that's applicable when the fitting function is linear in the fit parameters. It doesn't have to be linear in the independent variable, but it should be linear in the fit parameters. And when that's the case, you have the very nice property that solving that problem gives you the optimal fit, a unique optimal fit. Okay, and that's to be distinguished from the nonlinear case, where the functional form has a nonlinear dependence on the fit parameters. And in that case there are in principle multiple solutions to the problem. In order to guide whatever program you're using that's doing the fitting toward the solution that makes sense for your problem, it's very useful to have enough understanding of your data to be able to put in reasonable guesses for the fitting parameters. Okay, so we saw one example of that yesterday. Did we do the bi-exponential fit yesterday? I don't remember. Does anybody remember that? Did we only do the Gaussian fit yesterday? Okay, so we'll do another nonlinear fit with FindFit today at the beginning. All right, so let me go ahead and find that example. I seem to have misplaced it. Well anyway, we can cook one up. So just to remind you, we talked about this a little bit yesterday.
What we're going to do is create a data set that we will fit with this form here. Okay, so this is what's called a bi-exponential fit. It contains two exponential decays. The time constant of the first decay is called tau 1; that of the second is called tau 2. And then these are the weights with which each of these exponentials contributes to this function. Okay, so there are four parameters in this function that we will determine by fitting. There's C1, C2, tau 1, and tau 2. All right? And just to reiterate from yesterday, this function is nonlinear in the parameters tau 1 and tau 2, so we need to use a nonlinear fitting algorithm. All right. I'd really like to find my notes on that. Okay, here we go. All right. So the first thing I'm going to do is just define that function. All right? So we'll start out with C equals C1 times exp of minus t divided by tau 1, plus C2 times exp of minus t over tau 2. Okay? So this is the actual functional form that we're going to use in the FindFit command to do a fit. Now, in order to do the fit, we need some data. And normally we might measure the data and read it in. But right now, what we'll do is create an artificial data set that has this functional form, but we'll add in some noise to make the fit a little more interesting. Okay? So for that, I'm going to put in some specific parameters and call the result F. So F is going to be C evaluated, using replacement rules, at C1 equals 0.3, C2 equals 0.7, tau 1 equals 0.1, and tau 2 equals 5.0. Okay? And now what I'm going to do is just tabulate that function plus some noise as a function of t. So I'm going to create a table, data, which is going to have t and then F. And then to F, I'm going to add 0.01 times the square root of t times a random number, so RandomReal between minus 1 and 1. And I'm going to put a curly bracket and then tabulate this for t going from 0 to 10 in intervals of 0.05. Okay?
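[If I've transcribed the spoken commands correctly, the input so far looks something like this; the variable names are my reconstruction, not necessarily what appeared on screen:]

```mathematica
(* bi-exponential model: four fit parameters c1, c2, tau1, tau2 *)
c = c1 Exp[-t/tau1] + c2 Exp[-t/tau2];

(* substitute known parameters to build an artificial data set *)
f = c /. {c1 -> 0.3, c2 -> 0.7, tau1 -> 0.1, tau2 -> 5.0};

(* tabulate f plus noise whose amplitude grows like Sqrt[t] *)
data = Table[{t, f + 0.01 Sqrt[t] RandomReal[{-1, 1}]}, {t, 0, 10, 0.05}];
dataPlot = ListPlot[data]
```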
And then finally, we'll have a look at the data that we've created, so ListPlot of data. Okay? So let's have a look at this. So you can see that this 0.01 times square root of t times the random number adds in some noise that increases as time goes on. All right? And there are a couple of other things that you can notice here. Okay? So first of all, at the early times there's this very sharp decrease. Okay? And that's coming from the first term, which has the short time constant, 0.1. All right? And then the second decay kicks in and sort of dominates the time dependence of the data here, with the longer time constant, which we put in as 5. You can think of this as being a time axis. So if it was seconds, this is a decay occurring on 0.1 seconds and this one on 5 seconds. And you can kind of see where this one would start if it was the only term by sort of tracing it back to the origin, and you get 0.7. And that's because we put in a coefficient of 0.7 for the second decay. And you can see that the first one drops quickly over a short time, you know, less than one second. And the second one kicks in at around 0.7. This one, then, you can see has an amplitude of around 0.3, because together they add up to 1. Okay? When you put t equals 0 into that function, you get 1. All right? So fitting these data with a single exponential would probably not work very well, right? Because you would miss parts of both decays, really. Okay? So we're going to go ahead and fit that using FindFit, which does nonlinear least-squares fitting. Okay? Now a little later, we're going to plot the data and the fit together. So I'm going to label this plot dataPlot, okay? It came out a little bit different because I got different random numbers when I re-entered the command. Okay? Now, in order to use FindFit, we specify the data we want to fit and then the functional form, which is C.
And then we have to put in a list of guesses for our parameters. So, C1. Now, we know what the answer should be, more or less, because we put in the answers. But if we didn't know, we would be able to see that the first decay seems to drop by about 0.3 in amplitude. Okay? And so we might put in 0.3. And the second one traces back roughly to 0.7, so we can put in 0.7. Oops, so that should be C2, 0.7. And then for tau 1, we see that the function is sharply decaying over, you know, about half a second or whatever the time units are. So let's go ahead and put in 0.5. And then for the second one, we see that it's decaying over a few seconds, so just for fun, for tau 2, let's just say 10. Okay? So there are all of our initial guesses. And then we need to specify the independent variable, so that's t. All right? So let's see what we got. So just to remind you, FindFit is a little different from Fit. When you use Fit, you get out basically an equation that you can use directly. When you use FindFit, you get out the fitting parameters as a list of replacement rules. So you actually have to evaluate your fitting function with those parameters if you want to see what the curve looks like. Now, we notice that the optimal parameters here are very close to the ones that we put in. They won't be exact because we put in some noise. So C1 comes out to be 0.3; C2, 0.7; tau 1, 0.1; and tau 2 is around 5. Okay? So we can anticipate that the fit will be pretty good. So let's see how good. Let's go ahead and reissue this command and store the result in a variable that we can use; call it fitData. And now I'm going to generate a plot of the fit, which I'll call fitPlot, equals Plot. We're plotting our function C, and we're going to put in the fit parameters that we determined, stored in the variable fitData. Okay?
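[The fit just described, transcribed as input; a reconstruction using the same guessed names as before:]

```mathematica
(* nonlinear fit: an initial guess is required for each parameter *)
fitData = FindFit[data, c, {{c1, 0.3}, {c2, 0.7}, {tau1, 0.5}, {tau2, 10}}, t]

(* FindFit returns replacement rules, so evaluate c with them to plot *)
fitPlot = Plot[c /. fitData, {t, 0, 10}];
Show[dataPlot, fitPlot]
```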
So that's how we use the fitting parameters. And then we'll plot over the same range as the data, so t goes from 0 to 10. All right? So there's the fitting function. And now, to plot the data and the function together, we can use the Show command: so, dataPlot and fitPlot. And as anticipated, we see that the curve goes nicely through the data. Just for fun, let's try something else to make the point that the bi-exponential fit here looks better than a single exponential. So let's try another case, where I'm going to define, let's go up here and grab the first part of our function, just a single exponential. And I'll assign that to the function g. Okay? And now let's grab this. Okay? And now we're going to fit using g. So we don't need C2, and we don't need tau 2. And I'm going to put in 1 for C1, and I'll put in 10 for tau 1. Okay? So let's go ahead and enter that. And you see we get numbers out; there's no problem doing the fit. But let's see how it looks. So let's evaluate g using the fit data, and put those two plots on top of each other, the data along with the fit: so, Show dataPlot and fitPlot. And you see that you get an okay representation of the second decay, which is where most of the data are, so that part of the data weighs more in determining the shape of the fit. But already you can see, as you start to get down around 1, there's some pretty significant systematic deviation, and you completely miss the initial decay. Okay? So here you can see an example of how, when we're fitting, just looking at the quality of the fit by eye can help to inform you as to whether or not you've picked an appropriate model for your data. Okay? So hopefully, by just doing an eyeball comparison of this plot here to this plot here, you can see that the bi-exponential fit is immensely superior in terms of its ability to match these data. Okay? So that's part of the art of curve fitting: choosing an appropriate function.
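[The single-exponential comparison, as I read the spoken commands; again the names are my reconstruction:]

```mathematica
(* single-exponential model for comparison *)
g = c1 Exp[-t/tau1];
fitData1 = FindFit[data, g, {{c1, 1}, {tau1, 10}}, t]

(* overlay: the single exponential misses the fast initial decay *)
Show[dataPlot, Plot[g /. fitData1, {t, 0, 10}]]
```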
And that's often dictated by some assumption about the origin of the data and what its functional form should be. Okay? So there's a second example using FindFit, and that's the function that you're going to use to do the homework problem with fitting this week. Does anybody have any questions about using FindFit? Yes? [A student asks how the initial guesses were chosen.] You know, I kind of eyeballed it. So let's look at our data here. All right? So what I see in the data is, first of all, it's overall decaying in time. Okay? And another assumption, which is not obvious unless maybe you have some experience or you had something in mind when you were creating the data, is that I purposely cooked up these data to be basically exponentially decaying in time. And this is a common thing that you find; for example, anytime there's a first-order process determining the loss of a species or a signal, you'll get an exponential decay. And in some cases there's more than one such process acting at the same time, and they're separated in time scales. So this bi-exponential or multi-exponential behavior is something that's often observed and often used as a basis for fitting decaying data in time. Now for this particular case, the way I sort of eyeballed the parameters was this. We know from our study of kinetics that the point at which, say, the concentration of a reactant decays to one-half of its original value is what we call the half-life, which is related, in kinetics, to the rate constant, which is one over the tau that we use here. So I sort of eyeball it and say, well, where does this get to be half its initial value? And if I was to trace this down to, say, 0.5, I would say, well, that's on the order of a second or so. All right, so I just put in 1 or 0.5, something that's in the ballpark. I wouldn't want to put in 10 for tau 1, although it might work; we can try that.
Okay, and I noticed that this guy decays, and then at some point the second exponential kicks in. And I noticed that the point where I would expect the second one to hit the origin, if there wasn't the first one, is around 0.7. So I figure, since both of them add up to 1, the amplitude of the initial fast decay is going to be around 0.3. Okay, so that's dictated by what we choose as the fitting function. Okay, so as soon as I type this in, and assuming that I already know that t is the independent variable, then all the other things remain to be determined by the fit; that means I need to determine C1, C2, tau 1, and tau 2, all right? And knowing a little bit about what exponential decays look like allows me to guess those based on what the data look like. So I didn't talk about tau 2. If I say, well, here's this first exponential, and then it looks to me like there's a second one kicking in that looks like that, and if I look at where this guy decays to half of its initial value, which I guess would be around 0.35 or so, that's on the order of a few seconds. So if I put in 10 for tau 2, that's going to be in the right ballpark. But just for fun, I haven't tried this, but let's just see whether or not we get significantly different values if we make significantly poorer guesses. Let's try it. We have it all typed in. So suppose we go here, to this one where we did the bi-exponential fit, and what if I put in 1 for both C1 and C2? What if I put in 10 for tau 1 and 100 for tau 2? Okay, so these are quite far away from the real parameters we know we should get. Notice that you get completely bogus results. Yeah? You see this? We've gotten completely ridiculous answers. This is what I alluded to already: the solution to a nonlinear curve-fitting problem is not unique. There are multiple places in parameter space that look reasonable to the algorithm that's minimizing the deviations between the data and the fit function.
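[The poor-guess experiment, as input; same reconstructed names as above:]

```mathematica
(* guesses far from the true values: the fit converges to bogus parameters *)
FindFit[data, c, {{c1, 1}, {c2, 1}, {tau1, 10}, {tau2, 100}}, t]
```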
And so you do have to be roughly in the right ballpark. So let's try a more reasonable, but still somewhat different, set of guesses and see if we get to the right result. So let's go back and try leaving these at 1, which we know can't be true, because that would mean that the function would be 2 at the origin, which it's not. But let's try playing with these a little bit. Suppose we put in 2 here and 20 here. Okay? So there we actually managed to fall back into the basin of the parameter space where we get reasonable results. And those are exactly the same numbers we got before. Okay? So that's a very useful thing to keep in mind when doing nonlinear curve fitting: you need to know something about your parameters. Well, first of all, you need to know whether or not the things you get out are bogus. If they are, then you know that you need to shore up your guess a little bit, so you start out with a more reasonable guess. It doesn't have to be super accurate, as this last example shows, but it needs to be in the right ballpark, whatever that means. Okay? So that is one issue. And so when you're doing your homework, right, the A parameter, or rather A times T2, is the height of the function; T2 is related to the width of the function; and the mean frequency, nu zero, is where the thing has a peak. All right? So you can just kind of eyeball it and put in rough estimates, and you'll get a good answer. Okay? So you have to be careful with nonlinear curve fitting. Another thing, and we're not going to do it here, is that most curve-fitting programs allow you to impose constraints on your parameters. And sometimes those constraints are dictated by physical reality or just by the behavior of your data. So for example, one simple case of a constraint in the present case would be that we see that our data, well, the function, should be equal to one at time zero. So that means that C1 plus C2 have to be equal to one. Okay?
So we could impose that, in principle, as a constraint to help find the right set of parameters. Another thing is that we know that tau 1 and tau 2 should be positive numbers. Okay? They're meant to be time scales, relaxation times. So you can put in constraints that say those have to be positive numbers, and you could say, well, tau 2 should be greater than tau 1. That would be another constraint that you could potentially put in with nonlinear curve fitting. Yeah? [A student asks: since in your original equation you already defined what C1, C2, tau 1, and tau 2 are, did you just make those your guesses?] Well, yes, of course. So the question is about the fact that I actually put these numbers in at the start. I put those in to create the data, and then I pretended that I completely forgot that I actually made the data myself. Okay? So in reality, like in the homework problem you're going to do, you're going to just read in some data, look at it, and then you're going to have to put in the guesses. But it's true that in this case we cooked up the data, so we knew the right answer and we could have put those numbers in immediately. But in general, you wouldn't do that. This is just to have some data to play with. Okay? Any other questions about that? All right. Okay. So now we know how to do both linear and nonlinear curve fitting using Mathematica, using Fit and FindFit. Now, the next thing I want to show you is something that I just want you to be aware of. Okay? Notice that when we do FindFit, we get out the parameters of our fitting function and no other information. And we can use those parameters to do what we did, which is plot our fitted function over the data, and this is always a very good thing to do, and we see just by eyeballing it that it looks like a good fit. Okay?
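[Although it isn't done in the lecture, FindFit does accept constraints if the model is wrapped together with them in a list. A sketch of the constraints just mentioned, using the reconstructed names from earlier:]

```mathematica
(* constrain the amplitudes to sum to 1 and the time constants to be
   positive and ordered; {model, constraints} is the FindFit form *)
FindFit[data, {c, c1 + c2 == 1 && tau2 > tau1 > 0},
 {{c1, 0.3}, {c2, 0.7}, {tau1, 0.5}, {tau2, 10}}, t]
```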
But often when you're doing fitting, you want to have some quantitative measures of the goodness of fit and error estimates for the parameters that you determined. Okay? So the next thing I'm going to show you is souped-up versions of Fit and FindFit that give you access to a wealth of information concerning the details of the fit. Okay? And to be perfectly honest with you, most of the information that's available, I don't even know what it is. You have to really, you know, have studied a lot of statistics to know in gory detail what all of these things are, but a small number of them are actually useful and intuitive, and so we're going to look at some of those now. The point here is that I want you to be aware that if you ever wanted to do some sophisticated statistical analysis of a fit, Mathematica, as well as other programs that do this sort of thing, gives you access to these quantities. All right? So we're going to go back to the linear case, and once again we're going to create some fake data to use as an example, so that we can do a fit and see what kind of information is available to us. All right. So the first thing I'm going to do is generate a line with noise. All right? And I'm going to do that with the Table command, and I'll stuff it into a matrix called data. Let's see. Table. And I'm going to use i as my x-axis, if you like, and then 2 times i plus 3. Okay? So what this would give me, if I evaluated it for a series of i, is a straight line with slope 2 and intercept 3. All right? Now, to make the fitting a little more interesting, I'm going to add to that some random numbers. So, plus RandomReal, and the random numbers are going to be uniformly distributed between minus 2 and 2, and then I'm going to evaluate this for i going from 1 to 20, and I'm missing a curly bracket here. Okay? And now I'm going to generate a plot that I'll call dataPlot, which is ListPlot of data. All right?
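[As input, the noisy line just described; the names are my transcription:]

```mathematica
(* straight line with slope 2 and intercept 3, plus uniform noise in [-2, 2] *)
data = Table[{i, 2 i + 3 + RandomReal[{-2, 2}]}, {i, 1, 20}];
dataPlot = ListPlot[data]
```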
So let's have a look. Okay? So there's a line that has a slope close to 2 and an intercept close to 3, with some noise in it. All right? And we did something similar before, where we used Fit to determine the best linear least-squares fit to those data. In fact, we can do that again just for practice. All right? So let's go ahead and fit that. Let's say fitData equals Fit of data, and then, you may recall, with Fit we specify the functional form in terms of its individual terms. So the first term, if we want to fit to a line, is going to be proportional to 1, and the second term is going to be proportional to x. Okay? And then our independent variable is x. All right? So if I do that, I get the equation for a line, and we see that because of the noise we don't get back exactly 2x plus 3. We get roughly 2x plus about 2.4. Okay? And we can generate a plot of that: fitPlot equals Plot of fitData, x goes from 0, or 1, to 20, and then Show dataPlot and fitPlot. Okay? So this is what we've done previously, and we see that indeed the line that Fit returned to us actually seems to go through the data pretty well. All right? So far we haven't learned anything new. Now, there's another version of Fit that does exactly the same thing but returns different information. Well, it returns the same information, but a lot of additional information also. And that command is called LinearModelFit. Okay? And it has exactly the same arguments as Fit, so we can just copy these and put them in. All right? And for future reference, I'm going to assign this to a variable I'll call linmod, for linear model. All right? Now, if we do that, notice what we get. We don't just get a simple equation for a line. We get some funny-looking object here called a FittedModel. Okay? And this object contains lots of information. All right? So let's have a look at a couple of these things.
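[Both fits, as I read the spoken commands; a reconstruction:]

```mathematica
(* Fit returns the fitted expression directly *)
fitData = Fit[data, {1, x}, x]
fitPlot = Plot[fitData, {x, 1, 20}];
Show[dataPlot, fitPlot]

(* LinearModelFit takes the same arguments but returns a FittedModel object *)
linmod = LinearModelFit[data, {1, x}, x]
```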
So first of all, if you wanted to actually plot the resulting model, the equation for the line, how do you access that? Well, there are a couple of ways, and so I'll show you a couple of ways. The first one is you can say Plot of linmod of x. All right? Since I said x was my independent variable, I can treat linmod as a function of x and plot that, for example for x going from 1 to 20. All right? If I do that, I get the line. Okay? And it's exactly the same line we had before; we just access it a little bit differently when we've gotten it from LinearModelFit. Okay? Another way to do it is to generate the equation by saying Normal of linmod. Notice that if I do that, it pulls the equation out and gives it to me as if I had just done Fit. Okay? So these are just two ways that you can get access to the actual model itself. So I could plot here by saying Plot of Normal of linmod, and I get the same line. Okay? So that's two ways to get access to the model itself. There's a whole bunch of other stuff. And to see all of what's there, you can say: tell me the properties of linmod, the available properties. And if I do that, I get a long list of things that I have access to. Remind me, is anyone in here taking a statistics class? Does any of this look familiar to you? Parameter errors, confidence intervals, things like that. Some of those things do, and some of them probably don't, right? So there's a lot of information here, maybe TMI, actually. Okay? So what I want to do is just show you how to access a couple of these, in case you ever would like to do that, because some of them are actually fairly easy to understand without formal training in what they mean. Okay? So one of these things is what are called the residuals. All right? So in this plot up here, we have this line, and the data are scattered around the line. And the residual at a given point is the difference between the actual value of the data at that point and the fitted function.
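[The access methods just described, in input form:]

```mathematica
(* two ways to get at the model itself *)
Plot[linmod[x], {x, 1, 20}]  (* treat the FittedModel as a function of x *)
Normal[linmod]               (* or pull out the fitted expression *)

(* list everything the FittedModel can report *)
linmod["Properties"]
```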
Okay? So by plotting the residuals, you can focus in on the scatter of your data around the fit. Okay? So if we want to access those residuals, we look in the table here and find this thing, FitResiduals, and we refer to that in our linear model. So for example, if I say: show me linmod of FitResiduals, what I'll get is a list of numbers that tells me, for each point in my data, how far that data point is from the line. And the list of numbers is not so interesting, but if you make a ListPlot of it, then you can see them. Okay? So this is now a plot of the values of the differences between our data and the straight-line fit at each point. So you can see that there's a fair amount of scatter there, and it looks to be randomly distributed. Of course it is, because we made it randomly distributed. But if there was systematic behavior in the residuals, it might make you question whether or not you chose the right function to do the fitting. Okay? Another thing that you can look at, so let's just type in linmod of ParameterTable, that might be interesting, is the parameter table. So let's have a look. Okay? So what is in here? Well, this first entry is the parameter of our term that was proportional to one; that would be the intercept of our straight-line plot that we determined by fitting. And then this one is the slope. Okay? So we already knew these. But now, look, we get additional information, namely some error estimates. So if you were reporting the intercept, you might say the intercept is equal to 2.4 plus or minus 0.5. All right? So that gives you a measure of how certain you are about that fit. And likewise the slope might be 2.03, say, plus or minus 0.04, or something like that. Okay? And then these values over here, which our statisticians have probably seen at some point or another, are things that are useful if you want to do hypothesis testing.
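[In input form:]

```mathematica
(* residuals: data minus fit at each point; plot them to look for structure *)
ListPlot[linmod["FitResiduals"]]

(* estimates, standard errors, t-statistics, and p-values *)
linmod["ParameterTable"]
```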
These are the well-known t-statistics and p-values that help with hypothesis testing. Okay? Another parameter that's very, very useful: if you want a single number that sort of summarizes the goodness of your fit, there's a parameter called R squared. Okay? And this is the one that's most often reported when people do linear least-squares fits. R squared is a measure of what fraction of the variability in your data has been captured by the model. Okay? And so what that means is that R squared goes between 0 and 1, and the closer it is to 1 the better, because that means that all the scatter in your data is being very well captured by the fit that you did. Okay? So let's have a look at that. First of all, if you look at this plot here, you see that the line does in fact seem to fit pretty well. So let's see to what extent that's reflected in the R squared. How close to 1 is it? To get that, we say linmod, bracket, and then put in RSquared. And we get a value of 0.99, which tells us that by this criterion the fit is very good. Okay? So there are many, many, many things in here. Many things. I'll show you one more, and then we'll see how these quantities vary when we put in more noise and make the fit a little less reliable. Okay? This one more thing is what are called the single-prediction confidence intervals. Okay? And I think the easiest way to talk about them is to actually have a look at what they look like. Essentially, these give a band around your fitted function within which your model predicts the behavior of the data with a certain level of confidence. And that confidence level is something you can specify, but by default it's taken to be 95%, which is a common choice. Okay? So what I'm going to do now is define ci, for confidence intervals, equals linmod, and we type in SinglePredictionConfidenceIntervals. Okay? If I enter that, notice I get a list of pairs of numbers.
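[In input form:]

```mathematica
(* fraction of the variance captured by the model; close to 1 is good *)
linmod["RSquared"]

(* 95% (by default) prediction bands: one {lower, upper} pair per data point *)
ci = linmod["SinglePredictionConfidenceIntervals"]
```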
So what this is: for the first point x in our data, this tells us the lower and upper values of the confidence interval, and then for the second point, et cetera, et cetera. So let's have a look at this. Let's plot these. Okay, now in order to plot these, we want to plot them as a function of x, so we have to do a little massaging. Part of the whole point of showing you this is just to show you how to do a little bit of data massaging. Okay? So the first thing I'm going to do is create a list that contains all the x values from my data. Okay? So I'm going to say xvalues equals, and here's a new command: Map, First, data. Okay? Now what this is going to do, and I have to write First with a capital F, is just take the first column of data and put it into xvalues. All right? So that's just 1, 2, 3, 4, 5, up to 20. It's the same as if we had said data, double bracket, All comma 1. Okay? So it's just another way of doing that. Okay? Now the next thing I'm going to do is define a list of pairs of numbers where the first entry is our x value and the second is the lower value of the confidence interval, in other words, all of these first elements; and then we'll make another set that's going to have x paired with the second value. Okay? So the first one is going to be called lowerCI, for lower confidence interval, equals Transpose, bracket, curly, xvalues, comma, and then Map First of ci. Okay? So if we do that, we get our x values paired with the lower confidence-interval values. So this is just a list of points, right? And we can do the same thing for the second; we'll just call this upperCI. And instead of taking the first element of each pair in ci, I'll take the last. Okay? So now I get the x values 1, 2, 3, 4 with the upper confidence-interval values.
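[The massaging steps, as input; names are my transcription:]

```mathematica
(* x values: the first column of data, same as data[[All, 1]] *)
xvalues = Map[First, data];

(* pair each x with the lower and upper confidence bounds *)
lowerCI = Transpose[{xvalues, Map[First, ci]}];
upperCI = Transpose[{xvalues, Map[Last, ci]}];
```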
Now the reason I'm doing this is that I want to plot these along with the data and the fit function, so you can see what we mean when we're looking at these confidence intervals. All right? So to do that, I'm going to make plots of them, and I'm going to draw them with dashed lines. Okay? So this is going to be called ciPlot, equals ListLinePlot, bracket, lowerCI and upperCI. Okay? So that's going to plot these two data sets that we just created; and then, PlotStyle, arrow, curly: for the first one I'm going to say black and dashed, and the same for the second, black and dashed. Okay? And now we'll show ciPlot along with dataPlot and along with, what did we call it up here? I guess we didn't call this anything, did we? Let's go back up here to our first plot. Oh, no, we did. We called it fitPlot. Okay? All right. So that's what that looks like. So our data are shown as dots, the fit is shown as the blue line, and then these dashed lines are the lower and upper confidence intervals, and you see they make a band around the data. And the way to interpret this band is that it's the region around the line in which some fraction of the data will fall, and that fraction depends on what you specify as the confidence level. Okay? We said 95 percent, and it looks like all of our points actually do fall in between those bands. Okay? So that's just a quick introduction to another type of information that's accessible to you if you use LinearModelFit. Now what I want to do next is basically redo what we just did, but with more noise put in, so you can see how these various numbers change, and maybe get a little bit of an appreciation for what they mean. Okay? So let's go all the way back to the beginning here and grab this. And what I'm going to do now is just crank up the noise level. So let's put in 10. Okay?
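[In input form:]

```mathematica
(* dashed black lines for the confidence band, overlaid on data and fit *)
ciPlot = ListLinePlot[{lowerCI, upperCI},
   PlotStyle -> {{Black, Dashed}, {Black, Dashed}}];
Show[ciPlot, dataPlot, fitPlot]
```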
So we have something that looks like it might be a line, maybe, but it has a lot of scatter in it. I want to see how some of the parameters we've looked at have actually changed. Let's go all the way back up here and redo the LinearModelFit. Then fitplot equals Plot of linmod of x, with x going from 1 to 20, and I'll put a semicolon there so we don't have to look at it. Then we'll say Show[dataplot, fitplot]. So you see we were able to find a line; linear least squares will always give you a line through your data. The question is how good or bad this fit is compared to the one we did previously. Oh, by the way, let's have a look at the fit parameters. Let's go back up and reissue this command, and notice these numbers: 0.5, 0.04. We want to see how those change for the second version, where the fit doesn't look as good. Let's enter that. First off, you see that the intercept is quite different, and the slope is a bit different. And notice that the errors in these parameters have increased substantially. What about the R squared? Do you think it's going to be lower than what we found before, which was 0.99? Probably. How much lower? Let's find out. It goes all the way down to 0.86. Usually below 0.9 is considered pretty poor, and we can see just from looking at this that it's pretty sketchy. All right, let's do one more thing. Well, we don't actually need to redo the confidence intervals; you can kind of see where they're going to be: a much wider band, around which the model could vary and still trace out those data. So hopefully that helps you to appreciate, a little bit anyway,
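The noisy refit might look like the sketch below. The underlying line (intercept 0.5, slope 2) is my assumption, based on the parameter values quoted in the lecture; the point is how the diagnostics degrade when the noise amplitude goes from 1 to 10.

```mathematica
(* same linear model, but with the noise amplitude cranked up to 10 *)
noisydata = Table[{x, 0.5 + 2 x + 10 RandomReal[{-1, 1}]}, {x, 1, 20}];
linmod2 = LinearModelFit[noisydata, x, x];

(* diagnostics to compare against the low-noise fit *)
linmod2["ParameterTable"]   (* much larger standard errors on the parameters *)
linmod2["RSquared"]         (* drops well below the earlier 0.99 *)
```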
first of all, the utility of having some of these other quantities that tell us about the quality of our fit, but also that they make sense even if you haven't had formal training in them: you can see how they can be useful for quantifying how well the model you're fitting describes your data. So, are there any questions on this one, LinearModelFit? All right. If not, then we'll finish up by doing the nonlinear version, which is called NonlinearModelFit, and see some of the same types of parameters. To generate some data for fitting, we'll do a noisy Gaussian just like we did yesterday. So we'll redo some things from yesterday. Define a Gaussian: a times Exp of minus b times the quantity x minus c, squared. We'll generate data by evaluating the Gaussian for a particular set of parameters: f will be equal to gaussian /. a → 2, b → 0.25, c → 5. To generate the data we'll use the Table command: Table of {x, f plus some noise}, where the noise, like yesterday, is 0.2 times the square root of f times a random number, RandomReal going from minus 1 to 1, and we'll evaluate it for x from 0 to 10 at intervals of 0.1. So let's enter that, and then we can do dataplot equals ListPlot of data, so we can see what our data looks like. Oops. Did I use c? Let's just Clear some things here, just in case: a, b, c, x. Okay. So it looks similar to what we generated yesterday, and we used this as an example for FindFit. Now, instead of FindFit, we'll use the souped-up version called NonlinearModelFit. We'll define nlmfit equals NonlinearModelFit of data and the Gaussian. Because it's a nonlinear fit, we do have to put in guesses for the parameters. The a parameter is the height, so we can put in 2 for a. For b,
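The Gaussian setup just described, as code:

```mathematica
(* Gaussian model and synthetic noisy data, as set up in the lecture *)
Clear[a, b, c, x];
gaussian = a Exp[-b (x - c)^2];
f = gaussian /. {a -> 2, b -> 0.25, c -> 5};

(* noise scaled by Sqrt[f], so the scatter is larger near the peak *)
data = Table[{x, f + 0.2 Sqrt[f] RandomReal[{-1, 1}]}, {x, 0, 10, 0.1}];
dataplot = ListPlot[data]
```

Because Table localizes x, the symbolic expression f is re-evaluated at each numeric x value as the table is built.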
b is roughly 1 over 2 times the width squared, and we see the width is roughly around 1, so we can put in 0.5 for b. And then c is the position of the peak, the mean of the Gaussian, which is around 5. And x is our independent variable. If we enter that, we get another fitted model, and we can interpret it in the same way as for LinearModelFit. So, for example, we can say fitplot equals Plot of nlmfit of x, with x from 0 to 10, put a semicolon, and then Show[dataplot, fitplot]. We see by eyeball that the fit looks reasonable. And as we did with LinearModelFit, we now have access to more information. If we want the table of the best-fit parameters, we can say nlmfit bracket BestFitParameters. We see a is around 2, as we anticipated, b is around 0.25, and c is close to 5. If we want some statistics on the fitting parameters, we say nlmfit bracket ParameterTable, and we get the same values we saw previously, along with estimates of the standard errors of the fitting parameters a, b, and c. Now we can also do the confidence intervals. Want to see what those look like for the Gaussian? Grab this and back up to lowerCI. Oh, we have to actually define the CI first. Where did we put those? Here. And we have to change the name here; nlmfit is what we called it. Then we grab upperCI, and then the plot, those two lines. I think that should do it. So there you have it. Notice how these confidence intervals sort of track the variability of the data. Kind of interesting to stare at. So those are the analogous quantities for the nonlinear fit. Let's do one more thing: let's put a little more noise in and see how things change. We can do that by changing the coefficient multiplying the noise term to 1.
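The nonlinear fit and its diagnostics, assuming data and gaussian are the noisy-Gaussian list and model expression defined a moment ago:

```mathematica
(* nonlinear fit, with a starting guess for each parameter *)
nlmfit = NonlinearModelFit[data, gaussian, {{a, 2}, {b, 0.5}, {c, 5}}, x];

nlmfit["BestFitParameters"]   (* should come out near a = 2, b = 0.25, c = 5 *)
nlmfit["ParameterTable"]      (* estimates plus standard errors *)

fitplot = Plot[nlmfit[x], {x, 0, 10}];
Show[dataplot, fitplot]
```

The {parameter, guess} pairs are how the starting values discussed above (height 2, width-based 0.5, peak position 5) are passed in.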
So just take out the 0.2. It's a lot uglier. Notice we've got a quite different curve, even though the underlying function is the same; it just has a lot more noise in it. Keep an eye on these parameters and see how much they change. Especially the errors: notice they go up quite a lot. And finally, these bands should get a lot wider, and they do. So there you have a couple of examples of using LinearModelFit and NonlinearModelFit to do what we've already done with Fit and FindFit, but also to give you access to a variety of useful parameters for quantifying how good your fit is, testing hypotheses, things like that, if you ever need to do that. That's just a very brief introduction to some of the many statistical resources you have at your fingertips if you use Mathematica to do your fitting. All right, let's see what we have left here. We're getting close to the end, and we don't quite have enough time for the next thing. I want to show you one more thing involving plotting. Suppose you make a series of measurements and you've repeated those measurements, and for each point you've measured you have some estimate of the uncertainty in your data; for example, you may just use the standard deviation of the repeated measurements. And you'd like to include those uncertainties in your plot; in other words, you'd like to include what we call error bars, which indicate how uncertain each of your measurements is. You can do that, and that's what we'll do next time. We'll take some data corresponding to a series of repeated measurements; it's going to be data as a function of time. For each time point we'll calculate an uncertainty using the standard deviation, and then we'll include the error bars in our plot of the average values of the data.
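As a preview of that error-bar plot, here is a minimal sketch. The measurement data are made up for illustration, and I'm assuming the ErrorBarPlots standard package is the route we'll take; the steps match the description above: repeated measurements at each time point, a mean and standard deviation per point, then a plot with error bars.

```mathematica
Needs["ErrorBarPlots`"]  (* standard add-on package providing ErrorListPlot *)

(* hypothetical repeated measurements: three trials at each time point *)
trials = Table[{t, Table[Sin[t] + RandomReal[{-0.2, 0.2}], {3}]}, {t, 0, 6, 0.5}];

(* pair each {t, mean} with an ErrorBar of the standard deviation *)
pts = Map[{{#[[1]], Mean[#[[2]]]}, ErrorBar[StandardDeviation[#[[2]]]]} &, trials];

ErrorListPlot[pts]
```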
And then we'll do a couple more little miscellaneous things, just to show you some additional things that are kind of fun that you can do with Mathematica. Those are, specifically, being able to access databases directly from Mathematica, and then, as the final thing, I'll show you a free utility from Wolfram called the Mathematica Player, which won't allow you to create notebooks but will allow you to execute them if you have them available, or grab them from somewhere on the web. That's a nice thing to have access to for a couple of reasons. One is that there are a lot of very illustrative and educational notebooks out there that you can look at without having to buy Mathematica. The other is that it's a nice way to look at the contents of a notebook to get ideas about how to do more sophisticated things than we've learned in this class. All right, so that's what we'll do to finish up the course next time. Between now and then, please take a few minutes to fill out the evaluations, and we'll see you on Thursday.