What we will try to do today, in the first part of this morning session, is to wrap up the discussion on finite difference solutions. Let us quickly recall what we started in the final session yesterday: we were explaining the finite difference approach using a specific example, the transient flow between infinite parallel plates. The reason for this choice is that we have already worked out the analytical solution of this problem, so it can serve as a benchmark. If you recall, in any numerical solution method we divide the domain into a set of grid points or node points. In this particular case, for simplicity, we have chosen N grid points: point 1 on the left boundary, point capital N on the right boundary at y equal to h, and the rest equispaced at a grid spacing of delta y. In finite differencing, the partial derivatives in the governing differential equation are replaced with approximations, and those approximations are obtained from Taylor series expansions used in an appropriate manner. So what we did first was determine the finite difference approximation for the second derivative of u with respect to y at a given grid point; it comes out in terms of the adjacent grid point values u i plus 1 and u i minus 1 and the grid point value u i itself, divided by the square of the grid spacing. Then for the time derivative, combined with this spatial derivative, we looked at two options: one is what we call a forward time central space method, which leads to what we later called the explicit approach, and toward the end we were discussing the implicit approach.
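For reference, the two approximations just described can be written out compactly; here u with subscript i denotes the value at grid point i, and superscript n the time level:

```latex
\left.\frac{\partial^2 u}{\partial y^2}\right|_i \approx \frac{u_{i+1} - 2u_i + u_{i-1}}{\Delta y^2},
\qquad
\left.\frac{\partial u}{\partial t}\right|_i^{\,n} \approx \frac{u_i^{\,n+1} - u_i^{\,n}}{\Delta t}.
```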
So, here is the explicit approach. What we discussed yesterday is that the explicit approach turns out to be fairly straightforward: we start the solution at time equal to 0 with the specification of the initial condition and march forward in time in steps of delta t. Knowing the solution at the initial time, or more generally at any time level n (n equal to 0 being the special case of the initial condition), the entire right hand side of the boxed equation is known at that time instant, and so we can immediately find the updated value at the grid point under consideration at the new time level n plus 1. Simply knowing the values at the current time step, we can find, one by one, the values of the variable at the grid points at the next time level. This turns out to be very straightforward to implement in a computer code, so the explicit method is very simple and also very popular. The only issue is that it will blow up if you do not keep the time step under a certain value. I mentioned yesterday that the determination of the appropriate time step is achieved using what is called a stability analysis. Although we are not going to carry out the stability analysis here, I will simply point out the result that comes out of it for the particular equation we are dealing with, the diffusion equation: the explicit method is stable if the parameter nu times delta t over delta y squared, where nu is the kinematic viscosity, is less than half.
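As a minimal sketch of the explicit (FTCS) marching just described, in Python rather than the lecture's Scilab, with variable names (nu, dt, dy, nsteps) that are my own and not from the lecture code:

```python
# Explicit (FTCS) update for du/dt = nu * d2u/dy2 between parallel plates.
# u[0] = 0 (lower, stationary plate); u[-1] = U (upper plate moving at speed U).
def ftcs_march(u, nu, dt, dy, nsteps):
    r = nu * dt / dy**2          # stability requires r < 0.5
    u = list(u)
    for _ in range(nsteps):
        un = u[:]                # frozen values at time level n
        for i in range(1, len(u) - 1):
            # entire right hand side is known at level n
            u[i] = un[i] + r * (un[i + 1] - 2.0 * un[i] + un[i - 1])
    return u

# Example: 11 points across a gap h = 1, impulsively started upper plate U = 1.
N, U = 11, 1.0
u0 = [0.0] * (N - 1) + [U]
u = ftcs_march(u0, nu=1.0, dt=0.004, dy=0.1, nsteps=2000)  # r = 0.4 < 0.5
```

After enough steps the profile approaches the steady linear Couette solution u(y) = U y / h, which is exactly the benchmark behaviour the analytical solution predicts.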
This you can show rigorously and formally through that stability analysis, which most elementary finite differencing books will have. Finally, we were talking about the implicit method, and what we realized is that in the implicit method, also called a backward time central space method, we end up with a system of linear algebraic equations for the unknown interior grid point values, going from i equal to 2 to capital N minus 1, when we want to determine the variable at the (n plus 1)th time level. So let me put up this matrix representation. Many of you would remember that a system of linear algebraic equations can be represented in matrix form as a coefficient matrix multiplied by the vector of unknowns, which in our case is u 2 to u N minus 1, equal to a right hand side which is essentially known. What happens in the implicit method, then, is that at every time step you have to solve this linear system with an appropriate algorithm, and when you solve it you obtain the values of the dependent variable at all grid points. The slide projected on the screen right now shows the matrix where I have included the boundary conditions as well. Remember that in our particular problem the boundary conditions are that at all times u 1, the value at the left boundary grid point, is equal to 0, and the value at the right boundary grid point N is equal to capital U. So u 1 represents the value of u at the lower plate, if you relate it to the physical problem, and u N relates to the value of u at the top plate, which is moving at the speed capital U.
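The implicit (BTCS) scheme behind this linear system can be written out as follows; the lecture denotes the coefficients by a and b and the right hand side by c, and in standard form, with r = nu delta t / delta y squared, the equation for interior point i reads:

```latex
-\,r\,u_{i-1}^{\,n+1} \;+\; (1 + 2r)\,u_i^{\,n+1} \;-\; r\,u_{i+1}^{\,n+1} \;=\; u_i^{\,n},
\qquad i = 2,\dots,N-1,
```

so the main diagonal entry is b = 1 + 2r, the off-diagonal entries are a = -r, and the known right hand side is c = u at the previous time level.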
Many times you can remove these boundary conditions; in fact, most of the time the matrix representation is written only for the interior grid points, but for completeness I have included the boundary conditions here as well. You can discard the first row and the last row altogether from this matrix representation, so let us rewrite the matrix in terms of only the values u 2 to u N minus 1. I will switch to the whiteboard and write this for u 2 to u N minus 1. If you remember, we had written a few equations explicitly by choosing different values of i equal to 2, 3, 4, etc.; essentially I am going to take those equations again and put them in matrix form, only without the first and last rows, which correspond to the boundary conditions at the bottom plate and the top plate. So let me write the matrix again. Let us look at the coefficient matrix and the column vector of unknowns; I will write the right hand side separately because I have run out of space on this paper. I have written only the non-zero elements that occur in the coefficient matrix. The only non-zero values occur along the main diagonal, all equal to the constant b we determined yesterday (if you go back a couple of slides, you will find the expression for b), and along the two diagonals immediately above and below it, and that is all.
This is what is popularly called a tridiagonal coefficient matrix, and a very efficient algorithm to solve this linear system of equations is based on the Gauss elimination method; that is what we will talk about. But before we get to it, let me complete writing the right hand side of the linear system; I will simply write it here on the left part of the paper. As you can see, the right hand side contains those constants c, which are known at any given time step (see our previous set of slides), and I have also transposed the two terms containing the boundary conditions, which are likewise known, into the known right hand side. That is how we can view this entire linear system of algebraic equations as having a tridiagonal coefficient matrix. Now let us look at that tridiagonal coefficient matrix again and see what the Thomas algorithm, or tridiagonal matrix algorithm (TDMA), which is essentially a special case of the Gauss elimination process, will do. Let me draw this coefficient matrix again, restricting it to the interior grid points u 2 to u N minus 1, as we wrote a few minutes back; my drawing is not that great, so let me close the matrix about here. The idea is the following: if you remember your basic numerical methods, the Gauss elimination process works by creating an upper triangular matrix from the coefficient matrix, and the TDMA, or Thomas algorithm, works in exactly the same way. We already have all zero elements here and here, so we do not have to bother about those.
The only elements that need to be converted to zero in order to make this coefficient matrix upper triangular are these elements a along the diagonal just below the main diagonal. If these are converted to zero, we will have achieved an upper triangular coefficient matrix. In this case, of course, it is not technically a full upper triangular matrix, because the elements above the first superdiagonal are already all zero; what will be obtained as a result of our Gauss elimination process is an upper bidiagonal matrix. So how do we get rid of this element, for example, and make it zero? We form the ratio of this element to the diagonal element above it, multiply that ratio by the element above, and subtract. Let me write how it is done: a is replaced by a minus (a over b) times b, which gives zero. In other words, we form the ratio of the subdiagonal element to the diagonal element above it, multiply that ratio by the element above, and subtract it from the element under consideration, thereby obtaining zero; and similarly for the remaining a's along the lower diagonal. In doing so, appropriate modifications of the diagonal elements b are also required for each row. So we visit row by row and change the a's to zero. In fact, when you put the algorithm on the computer, you do not have to actually store the value of a that is being changed to zero; we know it becomes zero by construction of the row transformation. So we do not even bother storing it; the only change in the coefficient matrix that is stored in the computer as part of the calculation is how the b values change as a result of these row transformations.
In this case also we take the same ratio a over b, the ratio of the subdiagonal element to the diagonal element above it, multiply it by the element sitting on top, which is also a as far as this b is concerned, and subtract to obtain the new value of b, shown by the square bracket, and so on. Each of these b's will be changed one by one in this process. If I go back to my document and go to the next slide, this is the algorithm I just described. All that we do is calculate and store the changes in the main diagonal elements only. All these a's that my highlighter is pointing out are going to be made zero, so there is no need to waste computer memory by storing a zero value. We only store the changed values along the main diagonal, and those are the only values calculated and changed according to the algorithm written out here. So this tridiagonal matrix algorithm, or Thomas algorithm as we call it, is basically the Gauss elimination process for a tridiagonal system. We have already seen what the tridiagonal system is, and I would imagine most of you have a reasonable idea of what the Gauss elimination process is; even if you do not, we have just described it. You eliminate the elements in the diagonal below the main diagonal, making this entire set of a's equal to zero by appropriate row transformations, and in doing so you change the values along the main diagonal as well. The upper diagonal does not need to be touched at all, the reason being that for each of these a's in the upper diagonal, the element that sits above it in the previous row is zero, so nothing changes in the upper diagonal when you carry out these row transformations.
So the only computation and storage required is for the changes in the main diagonal elements, the b's, and they are changed according to the formula written at the top. This is what is called the forward elimination step, as you may remember from Gauss elimination. In a standard Gauss elimination situation, the coefficient matrix does not in general have all these zeros at top and bottom, and the forward elimination leads to what is called an upper triangular matrix, as we remarked earlier. In this particular case, because the original coefficient matrix is itself tridiagonal, the forward elimination process results in what is called an upper bidiagonal matrix, which simply means that the only two diagonals remaining after the forward elimination is complete are the main diagonal (with its values changed, of course) and the one above it. Out of these, as I already said, only the changes in the elements along the main diagonal need to be stored, and the statement to change them is the one my highlighter is moving over. A(i, i) is simply one of these main diagonal elements; you can see that all of them are b to begin with. The way the calculation proceeds is that you first have to build this coefficient matrix; it is very easy to build in this case, because only the values along the three diagonals need to be initialized. The elements along the main diagonal are then changed according to the expression here: A(i, i) is replaced by its old value minus a ratio involving A(i, i-1).
That ratio is A(i, i-1) divided by A(i-1, i-1), the subdiagonal element divided by the diagonal element of the previous row, and we multiply it by the element sitting on top of our main diagonal element, which is A(i-1, i), and that is it. Now, I have written the loop from i equal to 2 to N minus 1, and the only reason I wrote 2 here is because I had included the boundary condition at the left end; in the actual code that will be given, perhaps it is not done this way, maybe it goes from 3 to N minus 1, or maybe it is exactly the same; one way or another it makes no difference to the values that are calculated. When we perform these row transformations, the first thing done is that the diagonal elements are changed, but you have to change the right hand side appropriately as well. The right hand side elements are changed in exactly the same manner: we take the same ratio, A(i, i-1) divided by A(i-1, i-1), and multiply it by the element above. Say I am looking at C 3: the value of C 3 is changed as C 3 minus that ratio multiplied by C 2, and that is what this expression says; you do this all the way down to the last row, which in our reduced matrix is the (N-1)th grid point. Having done this forward elimination, we arrive at what is called an upper bidiagonal matrix; once the upper bidiagonal matrix is constructed, let us see how it looks. I am still going to denote the changed values of the elements by their original symbols b and a; everything else will be zero here and here, since remember we have eliminated the lower diagonal with respect to the main diagonal and made it zero. The vector of the unknown values is our reduced vector.
So u 2, u 3, all the way up to u N minus 1, and the changed right hand side I will simply write as C 2, C 3, up to C N minus 1, where these are also changed according to the second expression of the forward elimination process that we just saw. Now, if you look at the last row, this entire set of values is zero except the last element, the corner element that is left. So if you multiply the last row by the column vector, what is left is simply B times u N minus 1 equal to C N minus 1, and therefore you can immediately find u N minus 1 equal to C N minus 1 divided by B. Of course, this C N minus 1 and B have been changed from their original values through the forward elimination, but they were calculated and stored as we carried it out, so we are always using the latest values of C and B in this back substitution process, as it is called. Once u N minus 1 is known, we go back one row at a time and use the previously computed values, such as u N minus 1, to calculate u N minus 2, then u N minus 3, all the way back to u 2. The way to sequentially calculate this set of values is given by the expression you see on the board right now, which is what we call the back substitution process: going from N minus 1 to N minus 2, all the way back to 2, you calculate the values using this expression. What I want you to do during the lab today, where you will have plenty of time, is to choose a couple of specific i values here.
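The forward elimination and back substitution just described can be sketched in code. This is a Python sketch for the constant-coefficient case of the lecture (diagonal b, off-diagonals a); the actual Scilab code handed out in the lab may index or store things differently:

```python
# Thomas algorithm (TDMA) for the constant-coefficient tridiagonal system
#   a*u[i-1] + b*u[i] + a*u[i+1] = C[i].
def tdma_const(a, b, C):
    n = len(C)
    d = [float(b)] * n           # main diagonal values (the b's, to be modified)
    C = list(C)
    # Forward elimination: zero the subdiagonal a's row by row.
    for i in range(1, n):
        ratio = a / d[i - 1]
        d[i] -= ratio * a        # b_i <- b_i - (a / b_{i-1}) * a
        C[i] -= ratio * C[i - 1] # C_i <- C_i - (a / b_{i-1}) * C_{i-1}
    # Back substitution on the resulting upper bidiagonal system.
    u = [0.0] * n
    u[-1] = C[-1] / d[-1]
    for i in range(n - 2, -1, -1):
        u[i] = (C[i] - a * u[i + 1]) / d[i]
    return u

# Small check: solve -u[i-1] + 4 u[i] - u[i+1] = 1 for five unknowns.
u = tdma_const(-1.0, 4.0, [1.0] * 5)
```

Note that, exactly as described above, the subdiagonal entries are never stored as zeros; only the running changes to the diagonal d and the right hand side C are kept.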
You can choose i equal to N minus 1 and make sure you understand the expression that comes out of it; then choose i equal to N minus 2 and see what that means. As you go backward from N minus 1 to N minus 2 to N minus 3 and so on, you will realize that you require the values of the variable calculated earlier: when you are calculating u at N minus 2, you need the value of u at N minus 1, which has already been calculated, and so on. That is how the back substitution process works: you first calculate u at N minus 1, then u at N minus 2 utilizing u N minus 1, and in that manner it keeps going back and finishes the process. As we mentioned earlier, this implicit method is unconditionally stable, and there is no restriction on the time step. In the lab today you will work out the solution of this problem using both the implicit and the explicit methods, and you will try different time steps as part of the problem statement. You will see that even though the solution always works for the implicit method, it may not give you very accurate results all the time; you still need a reasonably refined time step to obtain a fairly accurate solution. On the other hand, a large time step will not work at all with the explicit method; you must keep the time step below the value given by the stability condition.
So this is the first problem we wanted to discuss with this finite differencing approach. Going back for a second, this is our diffusion type equation, and we have looked at how it can be solved using the finite difference approach in two manners: one is what is called explicit, and the other, implicit, requires the solution of a system of linear algebraic equations. The most efficient algorithm for a tridiagonal system like this is the Thomas algorithm, the Gauss elimination for a tridiagonal system, or TDMA, which is outlined on the board here. As far as the calculation process goes, these are the statements that will be coded in the Scilab code. There are a bunch of things you need to do, of course, before you calculate the changes in the diagonal elements, and that is what I mentioned earlier: you have to build the matrix, that is, generate the coefficients at the various locations, before you carry out the forward elimination and back substitution. In this case building the matrix is very easy, but it needs to be done before you go ahead with the solution process. So that is about the first problem. What I would like to do now is talk about a finite difference solution for the second problem that we will take up in the lab. If you remember, this was the last problem we took up as our exact solution situation: a two dimensional steady state conduction problem, which we solved analytically to finally obtain a series solution. The equation is del squared T equal to 0 in a two dimensional Cartesian system, which is called the Laplace equation, as some of you would definitely know.
The specific problem that we had solved in the case of the exact solutions was a rectangular domain with length a in the x direction and length b in the y direction; the left edge, the bottom edge, and the right edge were maintained at temperature equal to 0, whereas the top edge was maintained at temperature equal to T1. With this setting, del squared T equal to 0 in the two dimensional x-y plane was solved analytically using the technique of separation of variables. We did not go through every detail of that technique, obviously; I basically gave you the series solution as the last step of the analysis. The purpose was really to use that series solution as a benchmark when we now solve the problem using a finite difference technique. This is what is called a boundary value problem, because remember this is a steady state situation: we are not time marching from an initial condition here; it is a steady state problem governed purely by the boundary conditions, since there are no initial conditions to talk about at all. Classically, this is what is called a BVP, or boundary value problem. So let us see what we do here. Just as in the one dimensional unsteady diffusion problem earlier, we create a grid; it is just that now we have a two dimensional domain. The grid has two sets of lines, one set along x and one set along y, and the simplest possible grid you can think of is to generate node points at a spacing of delta x in the x direction and delta y in the y direction; delta x need not be equal to delta y.
The way the grid is shown here, the bottom left corner is what is called grid point (1, 1); you then move along the bottom edge and reach x equal to a, the end of the x domain. I have essentially taken m grid points along the x direction. Now we keep adding such rows of m grid points, moving a distance of delta y each time; eventually we reach the y equal to b location, the top edge, and we are then using n grid points in the y direction. So the grid point at the top left corner is (1, n), and the last grid point, the top right corner where my highlighter is sitting, is the (m, n) grid point. The total number of grid points generated in this grid, or mesh, is m multiplied by n. I have also shown a general grid point in the interior of the domain, which we will simply call (i, j). In the next slide we will see its neighbors and the way we generate the expressions for the second derivatives with respect to x and y using our standard Taylor series expansions. The boundary conditions are again shown here: we keep T equal to 0 on the left, bottom, and right edges, whereas T equal to T1 is maintained at the top edge. So this is the grid setting: remember, we have m grid points in the x direction and n grid points in the y direction.
So the total number of grid points in our calculation is m multiplied by n. Zooming in, in some sense, on the interior grid point (i, j), we see that it has a right neighbor (i+1, j) at a distance delta x, a left neighbor (i-1, j) again at a distance delta x, and likewise a top neighbor (i, j+1) and a bottom neighbor (i, j-1), each at a distance delta y from the grid point under consideration at the center, (i, j). Now, going back to our previous problem, we had there obtained, using Taylor series expansions, an approximation for the second derivative of u with respect to y. There it was only a one dimensional domain; here we are talking about a two dimensional domain, but the expression we will use to represent the second derivative is exactly the same: once it will be utilized in the x direction, with the x spacing, and once in the y direction, with the y spacing. Going back to the problem at hand, at the bottom here I am talking about the second derivative of temperature with respect to x at the grid point under consideration, shown by my highlighter. It simply uses the previously derived Taylor series expression for the second derivative: the grid point value to the right plus the grid point value to the left, that is, (i+1, j) and (i-1, j), minus two times the value at the grid point under consideration, the whole thing divided by the square of the grid spacing in the x direction.
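Written out, the two second-derivative approximations just described are:

```latex
\left.\frac{\partial^2 T}{\partial x^2}\right|_{i,j} \approx \frac{T_{i+1,j} - 2T_{i,j} + T_{i-1,j}}{\Delta x^2},
\qquad
\left.\frac{\partial^2 T}{\partial y^2}\right|_{i,j} \approx \frac{T_{i,j+1} - 2T_{i,j} + T_{i,j-1}}{\Delta y^2}.
```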
That is our representation for the second derivative of T with respect to x at the grid point under consideration; in exactly the same manner you obtain the second derivative of T with respect to y at (i, j) by taking the appropriate neighbor values, the top neighbor and the bottom neighbor, along with the grid point value itself. Remember our governing equation is del squared T equal to 0 in Cartesian coordinates. We have an expression to replace the second derivative of T with respect to x and an expression to replace the second derivative with respect to y; we now put them together and convert our partial differential equation into a so-called finite difference equation. What I have done here is put these two expressions together and rearranged them; the rearranged expression gives the value of the temperature at the grid point under consideration in terms of the values of the temperature at the four neighboring locations and a geometric parameter of the grid, the ratio of the grid spacing in the x direction to the grid spacing in the y direction, delta x over delta y, which is called beta here. As I said, in general delta x need not be equal to delta y; if they are equal, then beta is simply 1 and you get the special case of the equation you see on the screen right now. You can imagine that such an equation can be written for all interior grid points; each equation will involve the neighboring grid points depending on the values of i and j you choose. Eventually, if you write all of these out, you will realize that this also results in a system of linear algebraic equations, and it turns out, although I am not showing it here, that this system has what is called a pentadiagonal coefficient matrix.
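As a sketch of the rearranged finite difference equation for one interior point, with beta = dx/dy; the function name and the nested-list storage of T are my own choices for illustration:

```python
# Rearranged 5-point Laplace stencil at interior grid point (i, j):
# T(i,j) = [T(i+1,j) + T(i-1,j) + beta^2 * (T(i,j+1) + T(i,j-1))] / (2 * (1 + beta^2))
def laplace_update(T, i, j, beta):
    b2 = beta * beta
    return (T[i + 1][j] + T[i - 1][j]
            + b2 * (T[i][j + 1] + T[i][j - 1])) / (2.0 * (1.0 + b2))

# For beta = 1 this reduces to the simple average of the four neighbors.
T = [[0.0, 2.0, 0.0],
     [1.0, 0.0, 3.0],
     [0.0, 4.0, 0.0]]
val = laplace_update(T, 1, 1, beta=1.0)
```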
I really do not want to get into that detail and show the structure of that pentadiagonal matrix. It is normally available in any standard finite difference text, and I have provided at least one or two at the beginning of our workshop; some of them actually write out the matrix for a typical grid of, say, 4 by 4. You can write out the equations explicitly for all grid points, put them together in the form of a system of linear algebraic equations, and you will realize that it ends up with a pentadiagonal coefficient matrix. This pentadiagonal structure, by the way, is a result of the second order expressions we are using to represent the second derivatives. The solution technique is slightly different; remember, it is still a system of linear algebraic equations, only the coefficient matrix is now different. Earlier we saw a tridiagonal matrix, where the only non-zero elements were along the main diagonal and the two diagonals adjacent to it; here there are going to be five diagonals with non-zero elements. It is a useful exercise, if you want to do it on your own at leisure, to write down this equation explicitly for a grid of 4 by 4, so that you generate 16 equations, and see how they look when you put them together as a coefficient matrix times the column vector of unknowns equal to a right hand side. The technique utilized to solve such a pentadiagonal coefficient matrix situation is what is called an iterative method. The way the iterative method works is that we initiate the solution with some guess value for the dependent variable.
In this case the dependent variable is the temperature; we start with a first guess, and then we utilize the boxed equation at the top to update the value from that initial guess. That is what is written out here: start with an initial guess for the dependent variable T and keep on updating the guess solution using the expression written out here. Then the question really is: why should anything change? If you look at the way the solution is updated at a given grid point, T(i, j) changes only if the values of its four neighbors change from one guess to the next, and the only reason that happens is because, if you go back to my initial grid and think about a point near the boundary, you will realize that some of its neighbors come from the boundary conditions. Therefore, as you keep updating the value of the solution at such a grid point, the value of the variable known at the boundaries is utilized, and this is what we call bringing the information from the boundaries into the domain. That is how the solution gets updated: the boundary condition information is sequentially brought in as you go from one iteration to the next. What I mean by one iteration to the next is this: call setting up the initial guess in the domain iteration 0; iteration 1 is then updating that initial guess with the equation we are seeing at the top, visiting each grid point one by one and updating the guessed value at that grid point using its neighbor values. Some of the neighbors will be boundary values for at least some grid points, and thereby the information is updated, and eventually the boundary information is brought into the entire domain.
So, this is the way the iteration technique works, and I am going to show it in the next slide with a picture also, but before that let us denote by k and k plus 1 our successive iteration levels. Using this terminology, the two basic iteration algorithms that we will employ are what are called the Jacobi iteration algorithm, where the value of the variable at a grid point, let us say i comma j, at the next iteration will be given in terms of the values of the four neighbors that we have available with us at the present iteration, and that is how you will keep updating. Clearly some of these will be boundary points when the appropriate grid point is chosen, and that is the way you will keep updating it. So, here in the case of the Jacobi iteration algorithm we in fact have two different arrays: one array of the solution will correspond to iteration level k, and there will be a different array which will correspond to iteration level k plus 1, and we actually maintain both arrays in storage. So, at any point in time there will be two arrays getting stored: one will be for the present iteration level k, and there will be another one which will be for the next iteration level k plus 1. So, the way this works then is that the entire set of neighboring values that is getting used to update the value at the grid point of interest is coming from the present, I should say previous actually, since we have decided to use k and k plus 1 as the successive iterations; let me say that k is the previous iteration value and k plus 1 is the present iteration value. So, from the previous iteration we take the solution at the appropriate neighbors, which are the two neighbors on the right and top and the two on the left and bottom, one each, and update the value of the variable at the grid point i comma j. 
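The two-array Jacobi scheme described above can be sketched as follows. This is a minimal illustration assuming a uniform grid (equal spacing in both directions, so each interior point is just the average of its four neighbors); the function name, test problem, and convergence tolerance are my own illustrative choices:

```python
import numpy as np

# Jacobi iteration for the steady heat equation on a uniform grid with
# fixed (Dirichlet) boundary values. Two arrays are kept, one for the
# previous iteration level k and one for the new level k+1, exactly as
# described in the lecture.
def jacobi(T, tol=1e-6, max_iter=10_000):
    T = T.copy()                          # iteration level k
    for it in range(max_iter):
        T_new = T.copy()                  # iteration level k+1
        # interior points updated only from the level-k array;
        # boundary rows/columns are left untouched
        T_new[1:-1, 1:-1] = 0.25 * (T[2:, 1:-1] + T[:-2, 1:-1]
                                    + T[1:-1, 2:] + T[1:-1, :-2])
        if np.max(np.abs(T_new - T)) < tol:
            return T_new, it + 1
        T = T_new
    return T, max_iter

# Example: one hot wall at 100, the other three walls at 0.
T0 = np.zeros((20, 20))
T0[:, 0] = 100.0
T_sol, n_iter = jacobi(T0)
```

Notice how the boundary column at 100 gradually "leaks" into the interior, one layer of grid points per sweep; that is the information from the boundaries being brought into the domain.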
An immediate improvement on this Jacobi algorithm is what is called the point by point Gauss-Seidel iteration algorithm, in which we do not really have two arrays as such; we have only one array and we keep on updating the solution, in the sense that the left neighbor and the bottom neighbor in our iteration equation, or the update equation, will be taken at their present values themselves. And let us see why this works on the white board. So, what I have shown is part of the domain, which shows the bottom edge, which is the bottom boundary, and the left edge, which is the left boundary. And therefore, grid points 1 comma 1, 2 comma 1, 3 comma 1, 4 comma 1 etcetera are all on the boundary. And therefore, at least in our present situation, where the temperature values are specified on the boundaries, these will all be known, since these are boundary values. What we have here is 1 comma 2 on the left boundary, here 1 comma 3 etcetera; these are also boundary points on the left boundary, and therefore their information will be known. Now, let me draw this a little better, or write this a little better. So, if you look at the grid point 2 comma 2, I will actually redraw it again on a slightly bigger scale. So, now, looking at the grid point 2 comma 2, when we want to update the value at this 2 comma 2 we are going to utilize the value from here, the value from here, the value from here and also the value from here, which is 2 comma 3. So, we will utilize the values from the 4 grid points that I have shown in crosses. So, as far as grid point 2 comma 2 is concerned, 2 of the neighbors that it has are boundary points, which are going to have known values as part of the boundary conditions, and using these 4 the value here will be updated. Now, in the implementation loop you go one by one and visit these grid points. 
So, having visited the point 2 comma 2 and updated its solution, what you will do is then move on to the grid point 3 comma 2. When it comes to updating the value at grid point 3 comma 2, the 4 neighbors that you will utilize are going to be 3 comma 1 here, then this will be 4 comma 2, this will be 3 comma 3 and this will be 2 comma 2. So, these 4 will be utilized to update the value of the grid point 3 comma 2. Now you realize that while moving we have already visited and updated the value of the variable at 2 comma 2. So, we have the updated value of the variable available with us while we are sweeping the entire domain in the current iteration. Since we have the updated value at the 2 comma 2 point here, we utilize it immediately to update the value at 3 comma 2. In the case of the Jacobi algorithm the immediately available, or the latest available, value at 2 comma 2 is not used; the previous iteration value that is available at 2 comma 2 is utilized to update the 3 comma 2 value, and therefore the rate at which the solution proceeds to convergence in the case of Jacobi iteration is slower, or lower I should say, than the rate at which the solution proceeds to convergence in the case of the Gauss-Seidel iteration. So, you will see that we are going from left to right at any particular y location, and then we will move up to the next y location and again visit the grid points from left to right. At any particular grid point that you can think of, let me just mark it as 3 comma 3, you will realize that as you are sweeping you have already updated the value of its left neighbor and its bottom neighbor before. 
So, the left neighbor and the bottom neighbor are already available as the latest updated values within the present iteration sweep, and the Gauss-Seidel algorithm utilizes these latest available values from the left neighbor and the bottom neighbor when it comes to updating the value at the grid point under consideration. So, of the 4 values that will be utilized to update the value of a given grid point, the latest available values are going to be from the left neighbor and the bottom neighbor, and the previous iteration values are going to be from the right neighbor and the top neighbor, which I am not showing here. So, that is the way the Gauss-Seidel algorithm works; in the case of the Jacobi algorithm all 4 neighbors are taken at their previous iteration value, whereas here, as I have shown, the lower neighbor and the left neighbor are already updated because we have visited them as we are sweeping the domain and updated their values at the present iteration level. So, it is found that typically the Gauss-Seidel algorithm converges much faster than the Jacobi algorithm does. However, the Jacobi algorithm is the simplest one and easy to program in some sense; not that Gauss-Seidel is any more difficult, but Gauss-Seidel is a significant improvement over Jacobi in terms of convergence. Finally, what I want to introduce is the idea of what is called relaxation in the iterative solution process. So, to sort of understand what is meant by this relaxation process, we simply write the value of the temperature, let us say in this case, at the k plus first iteration as the value of the temperature at the k-th iteration level plus the difference that you obtain in the value of t when you go from the k-th iteration level to the k plus first iteration level. 
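The sweep described above, where the left and bottom neighbors are taken at their freshly updated values, can be sketched as follows. This is a minimal single-array illustration on a uniform grid; the function name and tolerance are my own illustrative choices:

```python
import numpy as np

# Point-by-point Gauss-Seidel for the steady heat equation on a uniform
# grid. Only ONE array is stored: because we sweep left to right and
# bottom to top, overwriting in place, the left and bottom neighbours
# are automatically at their latest (current-sweep) values.
def gauss_seidel(T, tol=1e-6, max_iter=10_000):
    T = T.copy()
    ny, nx = T.shape
    for it in range(max_iter):
        change = 0.0
        for j in range(1, ny - 1):           # sweep bottom to top
            for i in range(1, nx - 1):       # sweep left to right
                new = 0.25 * (T[j, i - 1] + T[j, i + 1]    # left: current sweep
                              + T[j - 1, i] + T[j + 1, i]) # bottom: current sweep
                change = max(change, abs(new - T[j, i]))
                T[j, i] = new                # overwrite in place
        if change < tol:
            return T, it + 1
    return T, max_iter

# Same hot-wall test problem as before.
T0 = np.zeros((20, 20))
T0[:, 0] = 100.0
T_gs, n_gs = gauss_seidel(T0)
```

On this kind of problem the single-array sweep typically reaches the convergence tolerance in roughly half the iterations that the two-array Jacobi scheme needs, which is the improvement discussed above.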
So, in that sense you can say that the right hand side is simply written as t k plus 1, that is the iteration level, and you add and subtract the value of the temperature at the k-th iteration level. So, in that sense nothing is different between the left hand side and the right hand side; the only thing is that the right hand side can then be interpreted as the value of t available at the k-th iteration level plus the change in the value of t that occurs between the k-th and the k plus first iteration levels, and that you add to the value of the temperature at the k-th iteration level. So, the idea is that this equation is to be interpreted as the value of t available at the k-th iteration level plus the change in the value of t that you obtain through the k-th to k plus first iteration, and thereby you obtain the value of t at the k plus first iteration level. So, even though algebraically this is just an identity, physically this is how you can interpret it. What we write then is this: of the change that we obtain in the value of t from the k-th to the k plus first iteration, we will add a certain percentage to the value of t that was available to us at the k-th iteration level. So, that percentage in some sense, or the fraction of that change, is given by this multiplying factor omega. So, this equation number one, which now you see on the board, in words you can interpret as: the solution after the k plus first iteration, which is on the left hand side, equals the solution after the k-th iteration, which is the first term on the right hand side, plus omega, which is some fraction, times the change in the solution during the k plus first iteration, which is the t k plus 1 minus t k inside the bracket. So, we moderate the amount of change that is happening in the variable t by the factor omega and add that moderated change to our previously available value of t at the iteration level k. 
Now, if you go back a couple of slides, let us say we choose our Jacobi iteration algorithm expression. So, what we have there is: t at the k plus first iteration level at the grid point i j, expressed in terms of t at the four neighboring grid points at the k-th, previous, iteration level. So, simply take this expression and substitute it where you see this t k plus 1 inside. So, I have repeated the Jacobi step here as the top equation, and then using this basic Jacobi iteration step we express our equation number one, written on the previous slide, as t i j at the k plus 1 iteration level equal to t i j at the k-th iteration level plus omega times the bracket; no change so far. The first term inside the bracket, which was this t i j at k plus 1, is now getting replaced with the Jacobi iteration expression. And finally, if you then rearrange, you will obtain the form at the bottom of your screen, which is essentially what is called the relaxation form of the Jacobi iteration algorithm. So, you can choose to decide how much of the change that occurs through the k-th to k plus first iteration you want to add to the value that was available to you after the k-th iteration level. So, that "how much change" is represented by this parameter omega, and so if we say omega is equal to 1 then we are adding the entire change, is what we will say. If omega is equal to 0.1, we will say that only 10 percent of the change that you achieve through the iteration level k to k plus 1 is what you are allowing to be added to the solution that was available at the end of the k-th iteration. If omega is 1.5, you will say that 150 percent of the change is added to the previously available iteration level solution. 
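The substitution and rearrangement described above can be written out as follows. This is a sketch assuming the common form of the 5-point stencil with grid aspect ratio beta equal to delta x over delta y; the exact denominator should be checked against the slide:

```latex
% Jacobi step for the 5-point stencil, \beta = \Delta x / \Delta y:
\tilde{T}^{\,k+1}_{i,j}
  = \frac{T^{k}_{i+1,j} + T^{k}_{i-1,j}
          + \beta^{2}\left(T^{k}_{i,j+1} + T^{k}_{i,j-1}\right)}
         {2\left(1+\beta^{2}\right)}

% Relaxation (equation number one): add a fraction \omega of the change
T^{k+1}_{i,j} = T^{k}_{i,j}
  + \omega\left(\tilde{T}^{\,k+1}_{i,j} - T^{k}_{i,j}\right)

% Rearranged: the relaxation form of the Jacobi iteration algorithm
T^{k+1}_{i,j} = (1-\omega)\,T^{k}_{i,j}
  + \frac{\omega}{2\left(1+\beta^{2}\right)}
    \left[T^{k}_{i+1,j} + T^{k}_{i-1,j}
          + \beta^{2}\left(T^{k}_{i,j+1} + T^{k}_{i,j-1}\right)\right]
```

Setting omega equal to 1 collapses the last line back to the plain Jacobi step, which is the check that the rearrangement was done correctly.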
So, finally, at the bottom of the slide here you will see that the entire iteration is written in the relaxation form, I should say, which involves 1 minus omega multiplying the value of t at the k-th iteration level, plus this omega over 1 plus 2 beta square etcetera; everything else here is at the previous iteration level. So, this is the most basic idea in the process of iterations: that we obtain a certain amount of change in the value of the variables when we go from iteration level k to iteration level k plus 1. If we add the entire change, that is fine, but then we can choose to add only a fraction of that entire change, or even more than 100 percent of that entire change, and obtain the value of the temperature at the next iteration level. So, usually you will see that this successive relaxation algorithm, as we call it, is utilized not with the Jacobi step; it is normally utilized with the Gauss-Seidel step. So, the only difference is that the left neighbor and the bottom neighbor in our iteration expression are at their latest available values, which is the present or current iteration that is going on. The reason is because we have already updated these two values and we can use them; everything else in the relaxation form is exactly the same. The parameter omega is what we are calling the relaxation parameter; again, if you want to interpret that on any sort of a physical basis, you can simply imagine that this parameter tells us how much of the change that we are carrying out in the value of t, within an iteration from level k to k plus 1, we are adding to the value of t that was available at the end of the k-th iteration to obtain the value of t at the end of the k plus first iteration. So, it can be 100 percent, it can be 20 percent, it can be 150 percent and so on. 
If it turns out to be less than 100 percent we call it under relaxation, which corresponds to a value of omega between 0 and 1, and in general this slows down the convergence of the iterations. So, we are eventually going to converge to the correct solution, but we will reach it over a larger number of iterations if we are using an under relaxation factor, which will mean that omega is less than 1. If omega is equal to 1, then we just recover our standard point by point Gauss-Seidel algorithm, if you look at the boxed equation that is projected right now on screen. If on the other hand omega is greater than 1, we call it an over relaxation process, and for some problems which are relatively straightforward and easy, such as the one that we are actually talking about right now, steady state temperature determination in a rectangular domain, you will see that over relaxation can actually speed up the convergence of the solution. However, there is usually some sort of an optimum value that people find, to be used in order to obtain the maximum speed in some sense, and there is no hard and fast rule by which you can determine this value of the over relaxation parameter, which is greater than 1 for omega. So, what we are going to do in the lab is simply try out different values for omega, just to get the idea of what happens in terms of the number of iterations required to reach a converged solution if you choose omega less than 1, versus if you choose omega equal to 1, versus if you choose omega equal to say 1.5 or 1.6 or whatever. 
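The lab experiment described above can be sketched as follows: Gauss-Seidel with the relaxed update, run for several values of omega so the iteration counts can be compared. The function name, test problem, and the particular omega values are my own illustrative choices, assuming a uniform grid:

```python
import numpy as np

# Successive relaxation applied to the point-by-point Gauss-Seidel sweep
# (SOR). With omega = 1 this reduces to plain Gauss-Seidel; omega < 1 is
# under relaxation, omega > 1 is over relaxation.
def sor(T, omega, tol=1e-6, max_iter=20_000):
    T = T.copy()
    ny, nx = T.shape
    for it in range(max_iter):
        change = 0.0
        for j in range(1, ny - 1):
            for i in range(1, nx - 1):
                gs = 0.25 * (T[j, i - 1] + T[j, i + 1]
                             + T[j - 1, i] + T[j + 1, i])
                # add only a fraction omega of the Gauss-Seidel change
                new = (1.0 - omega) * T[j, i] + omega * gs
                change = max(change, abs(new - T[j, i]))
                T[j, i] = new
        if change < tol:
            return T, it + 1
    return T, max_iter

# Numerical experiment: same hot-wall problem, different omega values.
T0 = np.zeros((20, 20))
T0[:, 0] = 100.0
for omega in (0.5, 1.0, 1.5, 1.8):
    _, n = sor(T0, omega)
    print(f"omega = {omega:.1f}: {n} iterations")
```

Running this, under relaxation (omega = 0.5) takes the most iterations, omega = 1 recovers plain Gauss-Seidel, and over relaxation converges markedly faster, which is exactly the behavior to look for in the lab.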
So, at this point, since we are just about introducing these ideas, simply note down that usually when we are doing over relaxation we employ a value of omega between 1 and 2, and I think for this particular problem, if I remember, the optimum value is somewhere around 1.7 to reach convergence in the minimum time, and there is no hard and fast rule why it is this value and not that value; it is a totally problem dependent situation. In CFD, you will realize as you start learning more, many times we figure out such conditions using numerical experimentation, which means that essentially you just try out a few things and realize that for a given problem a particular setting seems to work the best.