So, yeah, so today we will summarize what we have been doing over the last forty-odd classes. Before we collectively recollect what we have done so far, I will make an observation about what we have been doing. Though this is supposed to be an introduction to computational fluid dynamics, you realize that what we have actually done is look at ways to solve differential equations and variational problems: how to represent them on the computer and how to get solutions of a numerical nature. So, I want to put that in some kind of context, and I want to make it a very broad context. So far, we have been acting as though it is fluid dynamics, but if you think about it, I have not really mentioned fluids that much. Maybe I have mentioned a little gas dynamics along the way, but the equations were not the full-fledged fluid equations; we looked at the Euler equations and so on. So, to put that in context, I want to graphically show you something about the nature of how these things work, just the big picture. On the one hand, I have a figure, which I will fill in, that is supposed to represent reality as it is: what we are trying to model. If I am not able to reproduce it exactly, it does not matter. And from here onward, I am going to write processes. The first is perception: this is what we see, whether you use a microscope to see or whether you just look.
When we talked about deriving the governing equations in one of the classes, I may have waved my hands around and said: molecules, what molecules? I do not see the molecules. Am I making sense? We may be able to infer that they are there, but a phenomenological attitude would be that it is continuous media all around. So, that would be perception: this is what we see. And the trouble with perception is that there is a lot of fine-grained detail, benches, carpeting, and we cannot handle that much detail. There are also things that we just cannot see: the location of every molecule at every instant is not going to work. So, we throw away what we think is the detail and come up with a mathematical model. For the mathematical model, of course, I am going to draw a geometrical shape as close to the circle as I can get. So, this is the big picture of what we are doing: there is reality, this is what we see, we cannot handle the detail, so we build a mathematical model. And very often, as in the case of the Navier-Stokes equations, we cannot solve the mathematical model either. So, then where do we go? We go to the computer. The drawing is deliberate: the circle is clearly supposed to have an infinite number of sides, while the polygon has only a finite number of sides. So, one of the things that we have looked at, for instance, is the question of consistency: if the number of sides keeps increasing, does the polygon become the circle? Does the discrete model go to the continuous one? These are questions that we have asked.
So, there is the issue of verifying that this discrete model is indeed that mathematical model. You have written a program; I will just say computer model, though you can add levels to this. There is the chalk-dust version, the finite difference scheme or whatever; then there is the actual program, the representation of that algorithm on that particular computer; and then there is the question of whether the program represents the algorithm that you set out to implement. You may want to verify that each of these is correct. Checking the model against reality is very often called validation; going from the mathematical model to the discrete one, of course, we actually called consistency. You understand what I am saying? So, this is sort of the big picture: this is what we have, this is what we see, this is what we are hoping we can do, and the mathematical model is the abstraction from this. Abstract basically means to extract: you abstract out the essentials and throw away the non-essentials. And if by chance you throw away something that is essential, that is a different story. But you throw it away and you do not worry, because that is how science works. Science works on generality, on things that are common. The particular is of no interest to science; we want to make a general statement. So, we abstract out the general, and very often we find that we cannot solve the result. So: actual fluid, what we see, Navier-Stokes equations, finite difference method. That is one way to look at it. Is that fine? Okay. As I said, I just wanted to talk about this so that we have a general context in which we can look at the course, and possibly some of the material that you have seen elsewhere.
So, why don't we see if we are able to review the material that we have covered so far? Where did we start? We have about forty-odd minutes, so we can cover it at roughly one minute per class. So, where did we start? We started off with the idea that the problems we are solving have solutions which are functions. Remember: the problems we are solving have solutions which are functions. It is not like x squared equals 2, therefore x is the square root of 2, where you get a nice number; what we have are more like functional equations, equations for which the solutions are functions. And we basically said: if I want to solve that on a computer, then I need to be able to represent the solution on the computer, and I need to be able to represent the problem on the computer. So, how does one represent things on the computer? We started with integers; computers are good at representing integers. How do you represent the real line? We found that computers are not good at representing the real line. The real line was the first place we encountered a form of representation error called round-off error, in floating-point representation. And of course, there is fixed point and floating point.
I will just mention that, since I inadvertently said floating point. You have fixed-point representation and floating-point representation. Fixed-point representation is just like dealing with integers: it is an integer with the decimal point assumed to be at a set position. In floating point, of course, the decimal point moves, because you have an exponent. So, the representation error in numbers, one of the things that you have to take from this course, has a special name: it is called round-off error. So, any time you are asked, what is the round-off error? Round-off error is the difference between the number and its representation on that computer. And that computer could be a piece of paper: if you decide that you are going to represent a number using three decimal places, then you are the computer, and that difference is the round-off error. Now, I smile, but if you read old books and they say "the computer should be careful", they really mean you. Go back to a numerical analysis book from the 1940s and "the computer" means the individual doing the computing. So, that is as far as representing numbers goes. Once we said we can represent numbers, obviously we look at other mathematical entities, and an obvious one is a vector, or an array; an array is the general idea in computing. So, can you represent vectors and matrices? You can represent matrices, but most programming languages will not do the matrix algebra for you. They will not do the matrix addition; you have to do it.
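A minimal sketch of round-off error, in Python for illustration: 0.1 has no exact binary floating-point representation, so even a simple sum carries a small representation error.

```python
# Round-off error: the difference between a number and its representation.
x = 0.1 + 0.2
print(x)          # 0.30000000000000004, not 0.3
print(x == 0.3)   # False: the representation error shows up

# Machine epsilon bounds the relative round-off of one 64-bit operation.
import sys
print(sys.float_info.epsilon)   # about 2.22e-16
```

The same idea applies if "the computer" is you with three decimal places on paper; only the size of the epsilon changes.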
There may be a few languages that do it, but most will not do it for you. So, how did we represent matrices? We represented vectors by locating elements in memory in a sequential fashion: say seven elements placed adjacent to each other could represent a vector, a one-dimensional array. So we are using the term array. And of course, if you wanted to get a matrix, you could then stack these one-dimensional arrays one on top of the other. That was the idea. So, it is actually possible to get a singly subscripted object Ai or a doubly subscripted object Aij, or even a triply subscripted object and so on: you take whatever you have stacked in the second dimension and stack that along the third dimension. So you know that you can actually create multi-dimensional arrays. Is that fine? Okay. So, by now what we have done is, we have managed to represent integers and, so to speak, the real line, and using those we are able to represent matrices or arrays: one-dimensional arrays, two-dimensional arrays, multi-dimensional arrays. The next thing, since we are talking about representing functions, is: what did we do after that? We looked for a function basis. We did a little review of vector algebra, the algebra that you learned in your college, and showed how we constructed vectors. We showed that if you had the dot product, it was enough for you to actually construct things.
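As a sketch of the point that most languages leave the matrix algebra to you: a matrix can be stored as rows stacked together (here nested Python lists, chosen for illustration), and the addition is a loop you write yourself.

```python
# A 2x3 matrix as stacked rows; the language stores it but will not add it.
A = [[1, 2, 3],
     [4, 5, 6]]
B = [[10, 20, 30],
     [40, 50, 60]]

# Matrix addition done by hand, element by element: C[i][j] = A[i][j] + B[i][j]
C = [[A[i][j] + B[i][j] for j in range(3)] for i in range(2)]
print(C)   # [[11, 22, 33], [44, 55, 66]]
```

Stacking such two-dimensional objects again gives a triply subscripted object, and so on for higher dimensions.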
It was enough for you to actually construct a set of basis vectors and so on, given a linearly independent set of vectors that spans the space that you are interested in. So, the first thing that we did was construct box functions. We constructed two box functions, f and g; I am not going to do all the details, this is a quick recollection. We basically said that if the box functions are zero on non-overlapping intervals, in this case f is zero here and g is zero there and the intervals do not overlap, then they are orthogonal. Remember, the part of the domain where the function is non-zero is called the support of the function. So, if the supports of the two functions are non-overlapping, the functions turn out to be orthogonal. That is one way to get orthogonality. The other way is that the functions change signs appropriately: trigonometric functions, sin x and cos x, occupy the same interval, they are defined on the same interval 0 to 2 pi, but they are still orthogonal to each other. So, we were able to show that we could represent functions using a function basis, using the box functions, and we saw again that there is a representation error. For the box functions, we asked ourselves: what is the nature of the representation error? The reason the question is important is: can I get an idea of what the error is? Because if you know what the error is, you can then try to get an idea of where the error is coming from, and then there is scope for improvement.
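The two routes to orthogonality can be checked with a crude numerical inner product, the integral of the product of the two functions. This is a sketch; the midpoint-rule quadrature and the particular box functions are my choices for illustration.

```python
# Inner product <f, g> = integral of f*g on [a, b], midpoint rule, n panels.
import math

def inner(f, g, a, b, n=20000):
    h = (b - a) / n
    return sum(f(a + (k + 0.5) * h) * g(a + (k + 0.5) * h)
               for k in range(n)) * h

# Route 1: box functions with non-overlapping supports; f*g is identically 0.
f = lambda x: 1.0 if 0.0 <= x < 1.0 else 0.0
g = lambda x: 1.0 if 1.0 <= x < 2.0 else 0.0
print(inner(f, g, 0.0, 2.0))                      # 0: disjoint supports

# Route 2: sin and cos share the interval but the signs make it cancel.
print(inner(math.sin, math.cos, 0.0, 2.0 * math.pi))   # ~0
```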
So, as I keep repeating, the individual techniques are skills, but there is a higher-level skill that you should aim at: look at the process that we are going through. You do something, then you ask the question, how good is it? Is there a way I can get an idea of how good it is? Because the minute you are able to quantify it, there is scope for improvement. That is very important. So, we asked about the representation error, and our analysis basically showed that box functions represent constant functions exactly and nothing else. So, what we then did was go to higher-order polynomials. Of course, one disadvantage the box-function representation had was that you get a lot of jumps. The supports are non-overlapping, so the function is very jumpy. You can get closer and closer to the solution, but the representation just gets jumpier and jumpier. The alternative to this was to say: well, we got orthogonality through non-overlapping supports, but we know that using something like the Gram-Schmidt process we can come up with an orthogonal set even if the functions overlap, as long as they are independent. So, we looked at the set of polynomials 1, x, x squared and so on, on an interval; we have to specify the interval. On minus 1 to 1, for instance, 1 and x are orthogonal to each other; on 0 to 1, they are not. If they are not orthogonal to each other, you can subtract from x its projection on 1 and get an orthogonal pair, and then you can build up an orthogonal set. That is what we saw.
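The Gram-Schmidt step just described can be sketched numerically on [0, 1]: 1 and x are not orthogonal there, but subtracting from x its projection on 1 gives x minus one half, which is. The midpoint-rule quadrature is my choice for illustration.

```python
# <f, g> = integral of f*g on [0, 1], midpoint rule.
def inner(f, g, n=20000):
    h = 1.0 / n
    return sum(f((k + 0.5) * h) * g((k + 0.5) * h) for k in range(n)) * h

one = lambda x: 1.0
x1  = lambda x: x
print(inner(one, x1))        # 1/2, not zero: 1 and x not orthogonal on [0,1]

# Gram-Schmidt: remove from x its projection on 1.
proj = inner(one, x1) / inner(one, one)   # coefficient = 1/2
p1 = lambda x: x1(x) - proj * one(x)      # p1(x) = x - 1/2
print(inner(one, p1))        # ~0: now orthogonal
```

Repeating the same subtraction against 1 and p1 for x squared builds the next member of the orthogonal set.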
However, just like sin x, cos x and so on, this has a problem. On the whole interval on which you are trying to represent a function using a series in 1, x, x squared and so on, or an orthogonal set formed from it, any change in a coefficient will change the function on the whole domain. Box functions, on the other hand, have something called locality; I want to recollect that word, locality. Locality allows you to locally change the function value, the representation, without affecting anything elsewhere. So, we combined this idea of polynomial interpolants with nearly non-overlapping supports to construct hat functions, functions that look like a hat; just call one of them f. The hat function is 1 at the node at which it is defined and 0 at the adjacent grid points. Is that fine? Everyone okay? So, we were then able to get linear interpolants. However, what was the disadvantage? You have overlap of adjacent hat functions, which can create interesting situations, like the one in the last-but-one class. Overlap is always a problem; that is the reason why we seek orthogonality, because otherwise you do not have it. But on the other hand, you get linear interpolants, and we saw that you could do not only linear but also quadratic and cubic. Am I making sense? Your interpolants, your basis functions, can be any order that you choose. I think I mentioned quadratic and did cubic in class, if I remember right. What else did we do after that? We got an estimate of the representation error here.
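A minimal sketch of the hat function and the linear interpolant it gives, on a uniform grid with spacing 1 (the grid and the sampled function x squared are my choices for illustration).

```python
# Hat function centred at node xi with spacing h:
# 1 at its own node, 0 at the adjacent nodes, linear in between.
def hat(x, xi, h):
    return max(0.0, 1.0 - abs(x - xi) / h)

# A linear interpolant is a sum of hats weighted by the nodal values.
nodes = [0.0, 1.0, 2.0, 3.0]
vals  = [0.0, 1.0, 4.0, 9.0]          # f(x) = x^2 sampled at the nodes
def interp(x):
    return sum(v * hat(x, xi, 1.0) for xi, v in zip(nodes, vals))

print(interp(1.5))   # 2.5: halfway between 1 and 4, not the exact 2.25
print(interp(1.0))   # 1.0: locality, only the hat at that node contributes
```

Changing one nodal value moves the interpolant only on the two intervals touching that node; that is the locality the box functions had and the global polynomials lacked.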
Then we basically turned around and said: well, if I know the function, I can use this for the representation. But what if I do not know the function ahead of time? That was the motivation that I used. If I do not know the function ahead of time, what do I do? So, then we used Taylor series and came up with the finite difference representation. I am not going to use i's and j's as the indices; I will just make a general statement. If you have the function value f(b) and the function value f(a), then what is (f(b) - f(a))/(b - a)? This looks like something from the mean value theorem. The mean value theorem says that there is some point in between, some unknown location xi, at which this is the exact derivative, for a suitably smooth function. Now, (f(b) - f(a))/(b - a) is a number; just say it is 2. You calculate this and you get the answer 2. If you say 2 is the derivative at a, that is the approximation you are making, and the representation is first order: the error is proportional to b - a. If you say it is the derivative at b, the representation is still first order, but the sign of the error changes. And if you say the same number 2 is the derivative at the midpoint, then the error becomes second order, proportional to (b - a) squared. Is that fine? So, here it turns out that taking it as the derivative at an endpoint corresponds to the function being represented by a linear, and at the midpoint it is actually quadratic; you can see this from the error term.
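The order claims above can be sketched numerically: the same number (f(b) - f(a))/(b - a) has an O(h) error as the derivative at the endpoint and an O(h squared) error at the midpoint. The test function exp is my choice for illustration.

```python
# Check the order of the representation error by halving h.
import math

f, df = math.exp, math.exp   # f and its exact derivative
a = 1.0
errors = []
for h in (0.1, 0.05):
    slope = (f(a + h) - f(a)) / h
    e_end = abs(slope - df(a))            # taken as f'(a): first order
    e_mid = abs(slope - df(a + h / 2))    # taken as f'(a + h/2): second order
    errors.append((e_end, e_mid))

# Halving h roughly halves the endpoint error and quarters the midpoint one.
print(errors[0][0] / errors[1][0])   # ~2
print(errors[0][1] / errors[1][1])   # ~4
```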
It is as though the function is represented by a quadratic, even though you have actually drawn a straight line. And as I said, the mean value theorem unfortunately only says that at some unknown point it is exact; we do not know where. That is the whole deal. Looking at it graphically, you can guess where that point most probably is, but in reality we do not know. So, that is basically the idea of finite differences, and we found the truncation error using Taylor series. Truncation error: I introduced that term. If you have an infinite series and you truncate it, throwing away all the higher-order terms and retaining only the leading terms, the error that you make in doing that is called truncation error. What else did we do after that? Once we had the finite difference representation, we could represent first derivatives and second derivatives, and we showed that we could represent higher-order derivatives. Then we turned around and said, why don't we apply it to solving an equation, and we took Laplace's equation as the start. Laplace's equation, nabla squared phi equals 0, is an equation that I have written many times in this class, and it was relatively easy: phi at a point is a quarter of the sum of the neighboring phi's, the average of the neighbors. Laplace's equation turned out to be averaging of the neighbors. And we used that fact that it was averaging to show, what did we show?
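The averaging can be sketched as a Jacobi sweep on a small grid; the 4 by 4 grid and the boundary values (1 on the top row, 0 elsewhere) are my choices for illustration.

```python
# One Jacobi sweep for Laplace's equation: each interior point becomes
# the average of its four neighbours.
def jacobi_sweep(phi):
    n = len(phi)
    new = [row[:] for row in phi]
    for i in range(1, n - 1):
        for j in range(1, n - 1):
            new[i][j] = 0.25 * (phi[i + 1][j] + phi[i - 1][j]
                                + phi[i][j + 1] + phi[i][j - 1])
    return new

# 4x4 grid: top boundary row is 1, everything else starts at 0.
phi = [[1.0] * 4] + [[0.0] * 4 for _ in range(3)]
for _ in range(200):
    phi = jacobi_sweep(phi)
print(round(phi[1][1], 4))   # 0.375: between the boundary extremes 0 and 1
```

Note that every interior value lands strictly between the boundary minimum and maximum, which is exactly the maximum principle the averaging gives us.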
We showed that the solution is unique: two of you cannot get two different answers. We used the averaging to show that the solution satisfies something called the maximum principle, that the maximum and minimum in this case actually occur on the boundary, and we used that in turn to show that the solution is unique. Am I making sense? You may have seen this for the continuous case; we showed it only for the discrete case. So, what else did we do with Laplace's equation? We looked at some acceleration schemes, in particular SOR. We basically said that if you have a phi that comes from the averaging, instead of calling it phi n plus 1, we call it phi star. And it is possible to take a linear combination of the proposed solution and the current solution: to get phi n plus 1, you take omega times phi star plus 1 minus omega times the old one. And we looked for ways by which we could find the optimal omega. This is a situation where we are looking for non-uniform convergence: if the convergence were uniform in omega, omega would not help. We want the convergence rate to change with omega, and we found that there is actually a way to find the optimal omega. There is no analytic method by which you can do it, but you can systematically hunt for it. Is that fine? We also showed that solving this is the same as solving a system of equations A x tilde equals b tilde, which is the same as minimizing a functional Q of x tilde.
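The SOR update just described can be sketched on the same kind of small grid (grid size, boundary values, and omega = 1.5 are my choices for illustration): phi star is the averaging proposal, and the accepted update is the weighted blend.

```python
# One SOR sweep: phi* is the averaging (Gauss-Seidel style, in place),
# and the update is omega*phi* + (1 - omega)*phi_old, with 0 < omega < 2.
def sor_sweep(phi, omega):
    n = len(phi)
    for i in range(1, n - 1):
        for j in range(1, n - 1):
            star = 0.25 * (phi[i + 1][j] + phi[i - 1][j]
                           + phi[i][j + 1] + phi[i][j - 1])
            phi[i][j] = omega * star + (1.0 - omega) * phi[i][j]
    return phi

phi = [[1.0] * 4] + [[0.0] * 4 for _ in range(3)]
for _ in range(100):
    phi = sor_sweep(phi, 1.5)
print(round(phi[1][1], 4))   # 0.375: same converged answer, fewer sweeps
```

Hunting for the optimal omega means rerunning this with different omega values and watching which one drives the changes per sweep down fastest.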
The functional is Q of x tilde equals one half x tilde transpose A x tilde minus x tilde transpose b tilde, with the sign depending on which way you want to take it; I think I did that derivation at least twice, and we went through some interesting errors there, but anyway, it is fine. So, we showed that minimizing this functional is the same as solving that system. A, remember, is symmetric; A equals A transpose, and that is an important property here. There are some other interesting properties, but we will not worry about them. And we showed for SOR, as a consequence, why omega has to be in the interval 0 to 2: for this linear equation, omega should be within 0 and 2. We looked at hunting for omega with a few demos, and we found that for Laplace's equation at least, the optimal omega is very close to 2. What was the other thing that we did? Anything else on Laplace's equation? Then we looked at the convergence rate: how quickly does it converge? Using the same principle, we found the keyword that you use, the error. Once we got the expression for the error, we asked how quickly it is decaying, and that gave us an idea of the convergence rate. In a funny fashion, I introduced the ideas of eigenvalues, eigenvectors, and the spectral radius. Rho of A, or, for the Jacobi iteration matrix which I think I called P sub J, rho of P sub J, is the magnitude of the largest eigenvalue; it is called the spectral radius of that matrix. And we basically showed that you want the spectral radius of the iteration matrix to be less than 1 for convergence. That is fine, but that is not enough.
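The equivalence between solving A x = b and minimizing Q can be sketched for a small symmetric system (the 2 by 2 matrix and right-hand side are my choices for illustration): at the solution of A x = b, any perturbation only increases Q.

```python
# Symmetric positive definite A; x solves A x = b (worked out by hand).
A = [[4.0, 1.0],
     [1.0, 3.0]]
b = [1.0, 2.0]
x = [1.0 / 11.0, 7.0 / 11.0]

def Q(v):
    # Q(v) = 1/2 v^T A v - v^T b
    Av = [sum(A[i][j] * v[j] for j in range(2)) for i in range(2)]
    return 0.5 * sum(v[i] * Av[i] for i in range(2)) \
           - sum(v[i] * b[i] for i in range(2))

q0 = Q(x)
print(all(Q([x[0] + dx, x[1] + dy]) > q0
          for dx in (-0.1, 0.1) for dy in (-0.1, 0.1)))   # True: x minimizes Q
```

The symmetry of A is what makes the gradient of Q come out as A x minus b, so the minimizer and the solution coincide.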
That is okay if you are only asking whether it converges eventually, but eventually is not good enough, right? And we found that for this class of problems, rho of P sub J is extremely close to 1, so there are some issues there. SOR is one of those things that helped us fix that issue. Then what did we do? We changed gears and looked at the linear wave equation. Laplace's equation is averaging: averaging basically means, if there are differences, eliminate the differences. The linear wave equation was different: it is propagating. Whatever is there is carried. It is like a stream of water flowing: whatever is in the stream is carried along at that speed. So, we tried a variety of schemes for the linear wave equation, just as everything had worked for Laplace's equation. That was deliberately set up, because since everything worked for Laplace's equation, I wanted to do the same thing here, and I obviously knew that some of them were not going to work. We tried forward time-central space, FTCS, which was unconditionally unstable. For the stability analysis, we looked at the von Neumann stability analysis, a linearized stability analysis; here the equation is already linear, so it does not matter. We looked at the same kinds of exponentials that we used when looking at the error term earlier, and showed that forward time-central space is unconditionally unstable. There is nothing that you can do. In all of this discretization, of course, we came up with a parameter which we called sigma, the Courant-Friedrichs-Lewy number.
And this number turned out to be pretty critical; but maybe I am getting a little ahead of myself here. We also did FTFS, forward time-forward space; I should not forget that. It is also unconditionally unstable here. And then finally we did FTBS, forward time-backward space, and basically showed that if the Courant-Friedrichs-Lewy number is less than 1, we had a condition; we got conditional stability. FTFS, in fact, gave us the idea of upwinding. That is, FTFS did not work but FTBS worked because a is positive, and if the sign of a changed, if a became negative, then FTFS would work and FTBS would not. So, the equation is propagating u in a certain direction, and it looked like your scheme also needed to propagate u in the same direction. Is that fine? Then we asked the question: so what is the difference between FTBS and FTCS? Well, I guess before we did that, we looked at the modified equation. We asked ourselves: what is the problem that we are actually solving? And from that we got the modified equation, which had terms on the right-hand side that are higher-order derivatives. We asked what each one of these terms does; in fact, we looked at it up to the fourth derivative. We concluded that for this equation the odd derivatives are dispersive in nature and the even derivatives are dissipative in nature. Whether u decays or is amplified depends on the sign of the coefficient and the order of the derivative: the second-derivative coefficient mu 2 has to be positive, and the fourth-derivative coefficient mu 4 has to be negative, for decay. That is what we concluded; the odd terms are just dispersive. So, we saw decay and we saw dispersion. Dispersion basically means high frequencies and low frequencies travel at different speeds, which is a very important phenomenon.
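The stability contrast can be sketched for the linear wave equation u_t + a u_x = 0 with a > 0 on a periodic grid (grid size, sigma = 0.8, and the single sine-wave initial condition are my choices for illustration): FTBS stays bounded when sigma is at most 1, while FTCS grows no matter what.

```python
import math

# FTBS (upwind for a > 0): u_i^{n+1} = u_i - sigma*(u_i - u_{i-1})
def step_ftbs(u, sigma):
    n = len(u)
    return [u[i] - sigma * (u[i] - u[i - 1]) for i in range(n)]

# FTCS: u_i^{n+1} = u_i - (sigma/2)*(u_{i+1} - u_{i-1})
def step_ftcs(u, sigma):
    n = len(u)
    return [u[i] - 0.5 * sigma * (u[(i + 1) % n] - u[i - 1]) for i in range(n)]

n = 50
u0 = [math.sin(2 * math.pi * i / n) for i in range(n)]
ub, uc = u0[:], u0[:]
for _ in range(200):
    ub = step_ftbs(ub, 0.8)   # sigma = 0.8 < 1: conditionally stable
    uc = step_ftcs(uc, 0.8)
print(max(abs(v) for v in ub))   # bounded, and decayed: dissipation
print(max(abs(v) for v in uc))   # grown well past 1: unconditionally unstable
```

The decay in the FTBS result is the positive mu 2 of its modified equation at work; the FTCS growth corresponds to its negative mu 2.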
Various frequencies travel at different speeds. So then we came back and asked: what is the difference between FTCS and FTBS? How come this one works and that one does not? And we realized the difference was that in one case mu 2 was negative and in the other case mu 2 was positive in the modified equation. So, the natural question was: can't I just use FTCS and add a small amount of artificial viscosity? We tried it, and it actually works. And in fact, literally any scheme can be written as FTCS plus an appropriate correction, if you choose to see it that way. So, did we do anything else with the linear wave equation? Then we looked at the quasi-linear wave equation, and of course the general form, where the flux f is a function of u. And we showed that in this case, even where there were no discontinuities to start with and the initial condition is smooth, a discontinuity can form. So, it is an interesting combination. On the one hand, you have diffusion, the Laplace's-equation-like term, which is averaging out and eliminating discontinuities; high frequencies decay faster than low frequencies, which is one of the important outcomes I forgot to recollect. That being the effect of the right-hand side, it is interesting that the convective term was creating the high frequencies. And I pointed out, in a different context, that if you look at u dou u dou x and substitute u like sin theta, then dou u dou x is like cos theta, and sin theta cos theta is half sin 2 theta. So, there is a mechanism that increases the frequency, that doubles the frequency, which is very critical. This quasi-linear term seems to do that: it seems to increase the frequency.
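The frequency-doubling identity quoted above can be checked directly; a one-line sketch.

```python
# With u = sin(theta), u * du/dtheta = sin(theta)*cos(theta) = (1/2) sin(2*theta):
# the quadratic nonlinearity generates a mode at twice the frequency.
import math

theta = 0.7
lhs = math.sin(theta) * math.cos(theta)
rhs = 0.5 * math.sin(2.0 * theta)
print(abs(lhs - rhs) < 1e-12)   # True
```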
On the other hand, the diffusion term seems to try to decay it, and that combination is of course what makes fluid mechanics so interesting. So then, from here, what else did we do? Of course, if you get a discontinuity, it is called a shock, and we derived the Rankine-Hugoniot conditions for the shock speed and so on. Then what else did we do? We looked at one-dimensional flow: we derived the governing equations and tried FTCS plus dissipation for it. First of all, we tried to make the equation look like the linear wave equation. We wrote it in two forms, a conservative form and a non-conservative form. Maybe I will just write the equation: dou q dou t plus dou E dou x equals 0, where q is (rho, rho u, rho E t) and E is (rho u, rho u squared plus p, (rho E t plus p) u). These variables are said to be the conservative variables; written in this divergence form, the equation is mathematically said to be in conservative form. For us, of course, from gas dynamics, we know that across a shock these quantities are conserved. So, you can give any reason you want as to why you call it the conservative form, but these are called the conservative variables as a consequence, even if you write the equation in a non-conservative form. So, in a sort of desperate attempt to make it look like the linear wave equation, we wrote it as dou q dou t plus A dou q dou x equals 0, where A, the flux Jacobian, is dou E dou q. Then we saw that the system is coupled and asked: isn't there something that we can do to decouple it? We tried to change the variables from q to q tilde, which was (rho, u, p), and that did not help either. That still gave a coupled system of equations, dou q tilde dou t plus A tilde dou q tilde dou x equals 0, in terms of q tilde, which are the non-conservative variables, a certain combination; of course, we could have used (rho, u, T) also.
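The conservative variables and flux can be sketched for a perfect gas; the function name and the gamma = 1.4 closure are my choices for illustration.

```python
# Conservative variables q = (rho, rho*u, rho*Et) and flux
# E = (rho*u, rho*u^2 + p, (rho*Et + p)*u) for a perfect gas.
gamma = 1.4

def q_and_flux(rho, u, p):
    # specific total energy: internal (p / ((gamma-1)*rho)) + kinetic
    Et = p / ((gamma - 1.0) * rho) + 0.5 * u * u
    q = (rho, rho * u, rho * Et)
    E = (rho * u, rho * u * u + p, (rho * Et + p) * u)
    return q, E

q, E = q_and_flux(rho=1.0, u=100.0, p=101325.0)
print(q)
print(E)   # note E[0] equals q[1]: mass flux is the momentum variable
```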
And it turned out this was still coupled. Then we said, wait a minute, I can relate these two forms to each other: A and Ã are related by a similarity transformation, and I can transform one equation into the other. So we asked ourselves the question: is there a transformation that will diagonalize the matrix? If you use the modal matrix of A (or of Ã), the matrix made up of its eigenvectors, then it is actually possible to diagonalize it, and we got the characteristic form, where the system of equations was diagonalized and decoupled. So we got three equations that were basically propagating like the wave equation, though not the linear wave equation; each happened to be a quasi-linear wave equation. But at least our analysis would work, and if we do a discretization, the CFL condition, the Courant-Friedrichs-Lewy condition, involves Courant numbers of the form σ = λΔt/Δx; the largest comes from |u| + a, and the typical stability condition requires σ = (|u| + a)Δt/Δx < 1. We use |u| because u can be positive or negative; a is the speed of sound. Are there any questions? What did we do after that? We looked at the critical part of applying boundary conditions, and in applying boundary conditions we decided to use the fact that each characteristic equation propagates in a definite direction, determined by its eigenvalue. The eigenvalues, the λs, are u, u + a and u − a, where a is the speed of sound. So at a subsonic inlet, u and u + a are positive, and u − a is negative.
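The CFL restriction recalled above is easy to turn into a time-step rule (a minimal Python sketch; the function name and safety-factor parameter are mine): take the fastest characteristic speed |u| + a over the grid and size Δt so that the largest Courant number stays below a chosen limit.

```python
import numpy as np

def cfl_timestep(rho, u, p, dx, sigma_max=0.9, gamma=1.4):
    """Largest time step allowed by sigma = (|u| + a) * dt / dx <= sigma_max,
    using the fastest characteristic speed anywhere on the grid."""
    a = np.sqrt(gamma * p / rho)          # local speed of sound
    return sigma_max * dx / np.max(np.abs(u) + a)

# Stagnant gas with a = 1 everywhere: dt = sigma_max * dx
print(cfl_timestep(np.array([1.4]), np.array([0.0]), np.array([1.0]), 0.1))
```

In practice one recomputes this every step, since u and a change as the solution evolves.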
So at the inlet, two characteristics propagate into the domain and one propagates out, and we used that to determine the boundary conditions. Typically, for flow through a pipe, we applied P0 and T0 at the inlet and the ambient pressure at the exit, because at a subsonic exit u − a is negative while u and u + a are positive: two characteristics propagate out of the domain there, and one propagates back in. Am I making sense? Now, at each boundary point we need three quantities, but the physics has given us only some of them, so we have to get the rest, which we did by extrapolating. If I actually draw the domain and show the grid points, the first grid point, the penultimate grid point and the last grid point, we basically said that there are certain quantities that have to be extrapolated, and we explored various things that you could use to extrapolate to the boundary. This is very important. The emphasis here is that there are boundary conditions required by the physics of the problem. You have a pressure vessel; the air in it has a certain P0 and a certain T0, and these are measurable quantities. You have a pipe, you have a valve, there is an ambient pressure; you open the valve, and the ambient pressure and P0 basically determine the speed with which the air is going to flow, while T0 helps you determine the other parameter. That is what the physics requires.
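One of the extrapolation options we explored can be sketched in a couple of lines (Python; this is an illustration of first-order extrapolation, not necessarily the exact choice made in class): the boundary value of a quantity is extended linearly from the two nearest interior points.

```python
import numpy as np

def fill_boundary(q):
    """First-order (linear) extrapolation of one flow quantity to the two
    boundary points from the interior. A zeroth-order version would
    simply copy the neighbouring interior value instead."""
    q[0] = 2.0 * q[1] - q[2]       # inlet boundary point
    q[-1] = 2.0 * q[-2] - q[-3]    # exit boundary point
    return q

q = np.array([9.0, 1.0, 2.0, 3.0, 9.0])   # interior values are known
print(fill_boundary(q))                    # -> [0. 1. 2. 3. 4.]
```

Which quantities get extrapolated and which get prescribed is exactly the characteristic bookkeeping described above: prescribe as many as propagate in, extrapolate the rest.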
The mathematics basically says that you need three quantities, plus an initial condition, because you have a single time derivative and a first-order spatial derivative, and we have those three quantities. But the numerical algorithm insists that you also need Q at the boundary points, and therefore the numerics require more boundary conditions than the physics or the mathematics provide. We have to generate those boundary conditions; we have no choice. Is that fine? Then one important thing that we did was to write the equation in delta form, schematically some matrix times ΔQ equals −R. These remarks are just to recollect; I have not defined the residue in this recollection so far. The residue is this: if you have an equation and you substitute a candidate solution into it, whatever is left over when the right-hand side does not match is called the residue. So if you have LU = 0 and you substitute a candidate U and you get some value instead of zero, that value is the residue; here R is the residue of the steady-state Euler equation, and ΔQ is the correction to the current candidate solution that you have. The big thing to take from here is that R determines whether you have reached the solution: if the residue is zero, your current state is correct; if the residue is not zero, your current state has a problem and needs to be corrected, and you solve the system of equations to determine the correction. If the residue is zero, the correction is zero, as long as the matrix is not singular; that matrix can otherwise be almost anything, and therefore it is possible for us to choose something that makes the convergence faster. That was the other deal that we had.
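This freedom in the delta form is worth seeing concretely (a scalar Python sketch of my own, with a toy residue R(q) = q² − 2; it is not the Euler system, just the same structure): the iteration M Δq = −R converges to the zero of R for any nonsingular M, and the choice of M only changes how fast.

```python
def solve_delta_form(R, M, q, n_iter=60):
    """Iterate M(q) * dq = -R(q), q <- q + dq. Where you converge is set
    only by the residue R; a nonsingular M just sets the speed."""
    for _ in range(n_iter):
        q += -R(q) / M(q)
    return q

R = lambda q: q * q - 2.0                              # residue: zero at sqrt(2)
exact = solve_delta_form(R, lambda q: 2.0 * q, 1.0)    # M = dR/dq (Newton)
frozen = solve_delta_form(R, lambda q: 3.0, 1.0)       # some fixed nonsingular M
print(exact, frozen)   # both converge to the same root
```

The Newton choice converges in a handful of iterations, the frozen matrix takes more, but both land on the state with zero residue, which is the whole point of the argument above.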
So you compute the residue carefully, because it determines when you have reached the solution, and you choose the matrix carefully, because it determines how fast you get to the solution. Is that fine? And then we did a somewhat more involved application of boundary conditions, converting to characteristic coordinates and so on. Then I wrote out the equations for quasi-1D flow, and that was basically about it. What did we do after that? We looked at unsteady flow; I think we spent a little time on it, and we looked at preconditioning the unsteady term. See, there is a certain freedom that the earlier argument gives, which is why I emphasized it: the residue determines whether you have the solution, and if it goes to zero the correction goes to zero, so whatever multiplies the correction is free to choose. We looked at the steady-state problem the same way: if you are looking for the steady-state solution, ∂Q/∂t goes to zero, and therefore if I multiply ∂Q/∂t by some nonsingular matrix, it does not matter to the steady state. Why would I do that? I do it because the eigenvalues happen to be u, u + a and u − a, and the problem gets difficult when either u − a is close to zero, which means you are at or near transonic speeds, or u itself is close to zero. The problem then becomes extremely stiff: the eigenvalues, and hence the propagation speeds, become very disparate.
So by pre-multiplying ∂Q/∂t by a matrix Γ, it is actually possible for us to precondition the problem, and since I am looking only for the steady state, this does not affect the steady solution; it only affects the rate at which you reach it. Same idea as before: something is going to zero, so I can multiply it by whatever I want; it changes the algorithm so that I get there faster, but the destination is the same. That is the key. Is that fine? Then of course we looked at unsteady flows by adding a pseudo-time term. You can of course solve the unsteady equation directly, without the Γ, if you are looking for the unsteady solution; I think after my demo I did the Runge-Kutta method. If you are looking only for the steady-state solution, we call it a time-marching scheme; if you are looking for the transient, it is a time-accurate computation, and you could use Runge-Kutta or whatever. But what we said is that we have built up so much machinery for steady-state solutions that if you add a pseudo-time term ∂Q/∂τ, which goes to zero at convergence and can therefore be multiplied by a preconditioner, you can converge in τ at each step and get the unsteady solution in the real time t that you want. We were then talking about acceleration schemes, and we went back and basically remembered that high frequencies decay faster than low frequencies; we recollected all of these things. In my review I think I have left out the representation of functions, the critical demo on high and low frequencies, but it does not matter: high frequencies decay faster than low frequencies, and from that we basically came up with the multigrid scheme.
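The pseudo-time idea can be sketched on a scalar model problem (Python; the model equation du/dt = −u, the backward-Euler unsteady residual and the explicit pseudo-time march are all my choices for illustration, not necessarily what was used in class): at each physical time step you drive the unsteady residual to zero by marching in τ.

```python
def dual_time_step(f, u_n, dt, dtau=0.1, n_inner=200):
    """One implicit (backward-Euler) physical step of du/dt = f(u),
    solved by marching the unsteady residual R(u) to zero in pseudo-time."""
    u = u_n
    for _ in range(n_inner):
        R = (u - u_n) / dt - f(u)   # unsteady residual: zero at the implicit update
        u -= dtau * R               # explicit march in pseudo-time tau
    return u

# du/dt = -u from u = 1 with dt = 0.5; backward Euler gives u = 1/(1 + dt)
u = dual_time_step(lambda u: -u, 1.0, 0.5)
print(u)
```

Because the inner iteration only has to reach R = 0, all the steady-state machinery, preconditioning included, can be reused inside it without spoiling time accuracy.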
So the demo that I am talking about, the demo I will go back to, is where we represented sin x using hat functions. That was a critical demo for us because it showed that for a given grid there is a highest wave number that we can represent; that is very important. Or, turning it around, whether a wave number is high or low depends on the grid you are talking about. And if high frequencies decay faster than low frequencies, then you are interested in using a coarse grid, because what is a low frequency on the fine grid becomes a relatively high frequency on a coarse grid; hence the multigrid scheme. What you basically do is use grids of different sizes, h, 2h, 4h, 8h, and, this is the critical part, you transfer the residue from the fine grid to the coarse grid and you transfer the correction from the coarse grid to the fine grid. In transferring the residue you are, in a sense, transferring the problem, and in transferring the correction you are, in a sense, transferring the solution, if you think about it. So it is actually possible for you to run your problem on a very coarse grid, so that what seem to be low frequencies on h become relatively high frequencies on 8h and are eliminated there. Is that fine?
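Those two transfers, residue down and correction up, are the skeleton of a two-grid cycle. Here is a minimal sketch for the 1D Poisson problem −u″ = f (Python; the particular choices of weighted-Jacobi smoothing, full-weighting restriction, linear-interpolation prolongation and a direct coarse solve are mine, standard textbook options rather than what was shown in class):

```python
import numpy as np

def smooth(u, f, h, sweeps=3, w=2.0 / 3.0):
    """Weighted-Jacobi sweeps for -u'' = f with zero Dirichlet ends."""
    for _ in range(sweeps):
        u[1:-1] = ((1.0 - w) * u[1:-1]
                   + w * 0.5 * (u[:-2] + u[2:] + h * h * f[1:-1]))
    return u

def residual(u, f, h):
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (2.0 * u[1:-1] - u[:-2] - u[2:]) / (h * h)
    return r

def two_grid_cycle(u, f, h):
    """Smooth on h, send the residue to 2h, solve there, bring the
    correction back to h, smooth again."""
    u = smooth(u, f, h)
    r = residual(u, f, h)
    # Restriction (full weighting): residue goes fine -> coarse.
    r2 = np.zeros((len(u) - 1) // 2 + 1)
    r2[1:-1] = 0.25 * r[1:-2:2] + 0.5 * r[2:-1:2] + 0.25 * r[3::2]
    # Coarse solve: direct tridiagonal solve of -e'' = r2 on the 2h grid.
    n = len(r2) - 2
    A = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / (2.0 * h) ** 2
    e2 = np.zeros_like(r2)
    e2[1:-1] = np.linalg.solve(A, r2[1:-1])
    # Prolongation (linear interpolation): correction goes coarse -> fine.
    e = np.zeros_like(u)
    e[::2] = e2
    e[1:-1:2] = 0.5 * (e2[:-1] + e2[1:])
    return smooth(u + e, f, h)

N = 64
h = 1.0 / N
x = np.linspace(0.0, 1.0, N + 1)
f = np.pi ** 2 * np.sin(np.pi * x)      # exact solution: u = sin(pi x)
u = np.zeros_like(x)
for _ in range(10):
    u = two_grid_cycle(u, f, h)
print(np.max(np.abs(u - np.sin(np.pi * x))))  # down at the discretization error
```

Recursing on the coarse solve, 2h down to 4h and 8h, instead of solving it directly, turns this two-grid cycle into the full multigrid V-cycle described above.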
So much for multigrid schemes. And then we said: in fact, why would we not simply start with the coarse grid? When we talked about Laplace's equation at the beginning, that is one of the things we suggested: compute the solution on a coarse grid and transfer it as the initial condition on the finer grid. You could do the same here: start on the coarse grid, iterate a few times for Laplace's equation, or take ten time steps for the Euler equations, or whatever, and transfer the result to the next finer grid. So you could go 8h, 4h, 2h, h, and then start the cycling process. And the critical thing to recollect here is work units; that is what you always count. In one dimension, one sweep on the finest grid costs as much as two sweeps on the 2h grid, or four sweeps on the 4h grid, and if you go to multiple dimensions that gets even better. So you would expect that in three dimensions multigrid will work extremely well; the actual convergence rate that you get in wall-clock time, how quickly you get to the solution, will be much better. What did we do after multigrid methods? We looked at the calculus of variations. The calculus of variations basically comes back and says we are again in the business of representing a function, only now you actually have a measure of something. The example that I gave is: if you are coming from your dining hall to this classroom, what is the shortest path? So you come up with some kind of measure, a metric, to decide which function you want: you have a functional, you have a measure, and you want to minimize that measure. From there we derived the Euler-Lagrange equations, and we then showed the relationship between the
variational problem and the differential equation: because it involves optimization, there is a process of differentiating and setting things equal to zero, which yields the Euler-Lagrange equation. So in that sense the Euler-Lagrange equation is like a derivative: you take the first variation, and you get this derivative. We found and looked at the variational form of Laplace's equation and showed that from it you could get Laplace's equation directly, which was analogous to the earlier result that Ax = b is the minimization of a quadratic Q. And we basically said that, yes, it is sometimes possible for us to solve the problem in the variational form. In the variational form, the amount of smoothness required of the function is not as much; you do not need as many derivatives, so it is possible for us to reduce the derivative requirement. And very often the expression in the variational form is simpler than in the differential form. But it is not always easy to get the variational form given the differential form; you do not get something for nothing, and there is a real difficulty in doing that. And finally, of course, we end with today's class, so I will just recap and put it back into the big picture. The big picture is this: here is reality as it is, and here is the perception, what we see; I can draw anything for reality, because we cannot see it as it is. From this perception we abstract out a model, which in the figure I drew would ideally be an exact circle, because we have mathematical
precision, we abstract out a model; and then of course very often we cannot even solve the model, so we end up writing a computer program, which is discrete. Pictorially, that is what I am trying to represent here: this step is modelling, and this one is discretization, if you want. You can check consistency, whether the discrete model is consistent with the mathematical model: as the number of sides of that polygon increases, does it become the circle? That is consistency. And you can do validation: perform an experiment and check whether the computed result matches reality. We cannot always do that, but scientists and everybody else are constantly trying to. And of course there is a test you do at the modelling step also: you can add terms, remove terms, decide the flow is inviscid or the flow is viscous; all of that happens there. But once you decide to solve the Euler equations, you come back here. So there are two possibilities: either you check that the program solves the model, or you compare the result against reality, and that is validation. Validation is validation, that is it, and no more than that. Okay, thank you very much.