Now that we have seen various formulations of these static inverse problems, we would like to develop numerical algorithms to solve the problems we have already formulated. For example, in the linear least squares problem the solution process reduces to solving a linear system of equations where the matrix of the system is symmetric and positive definite. So it behooves us to ask: how do we carry out that step, especially when the system is large? This calls for actual numerical algorithms that take us to the ultimate step of computing the solutions. To see the bridge from problem formulation to actual numerical algorithms, in this module, which we call an interlude, we are going to give an assessment of what we have done and the way forward.

The least squares formulations can be classified in many ways: linear or non-linear, ordinary or weighted, orthogonal or oblique projection, full rank versus rank deficient, over-determined or under-determined, offline versus online. These are the formulations of the least squares problem we have covered thus far. Now, for the pathways to the solution, we have to convert the mathematical formulation into actual numerical computation, which leads to the numerical methods for solving the problem. There are two ways to approach this. One is to minimize f(x) directly; this leads to the so-called iterative minimization algorithms, which we will talk about as part of the module 4.3 coming attractions. The alternate way is this: to minimize f(x), we compute the gradient of f(x), we compute the Hessian of f(x), we solve the equation grad f(x) = 0, and we verify that at the solution the Hessian is symmetric and positive definite. That leads to solving linear systems of equations, and that solution process leads to a variety of matrix methods, which are covered in module 4.2. So in this module 4.1 we provide a global view of what we have done, where we need to go, and the two pathways to achieving the goal.

In addition to achieving the goal of solving the static inverse problem, the methods we are going to look at are useful throughout the course, whether it is a 3D-VAR problem or a 4D-VAR problem. These iterative methods and matrix methods are the workhorses that underlie the computation of solutions of inverse problems, and of data assimilation problems in particular.

In the linear case, for the offline ordinary linear full-rank formulation of module 3.1, we were called upon to solve the system H^T H x = H^T z. Here H^T H is a symmetric positive definite matrix, and m > n is the over-determined case. In the under-determined case, when m < n, we solve H H^T y = z for y, and the least squares solution is then obtained as x = H^T y. Here again H H^T and H^T H are Gramian matrices, and we know the Gramian is full rank when H is of full rank; it is symmetric and positive definite. So in these two cases we are called upon to solve linear systems with the symmetric positive definite matrices H^T H or H H^T, as in the sketch below.
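To make the two cases concrete, here is a minimal Python sketch of both solves. The matrix H and data z are synthetic stand-ins, not from the course slides, and the SciPy helpers simply exploit the symmetric positive definite structure of the Gramian:

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

rng = np.random.default_rng(0)

# Over-determined case (m > n): solve H^T H x = H^T z.
m, n = 8, 3
H = rng.standard_normal((m, n))      # assumed to be of full rank
z = rng.standard_normal(m)
A = H.T @ H                          # Gramian: symmetric positive definite
x = cho_solve(cho_factor(A), H.T @ z)

# Under-determined case (m < n): solve H H^T y = z, then x = H^T y.
m, n = 3, 8
H = rng.standard_normal((m, n))
z = rng.standard_normal(m)
y = cho_solve(cho_factor(H @ H.T), z)
x_min_norm = H.T @ y                 # minimum-norm least squares solution
```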
In the offline weighted linear full-rank formulation, as we saw in module 3.1, we are called upon to solve the system H^T W H x = H^T W z. Here H^T W H is a symmetric positive definite matrix. In the under-determined case we again have to solve an analogous system with the weighted Gramian and compute the weighted solution from it. Again, in these cases we are called upon to solve a linear system of the kind A x = b where A is symmetric and positive definite.

In the offline ordinary linear rank-deficient formulation (remember, we are trying to provide a summary of everything we have done) we are called upon to solve (H^T H + alpha I) x = H^T z, where H^T H + alpha I is a symmetric positive definite matrix. These kinds of formulations arose for the rank-deficient or ill-conditioned problem, out of Tikhonov regularization. That is the over-determined case; in the under-determined case we again solve a linear system of the same type, (H H^T + alpha I) y = z, and set x = H^T y.

For the online ordinary linear full-rank formulation the emphasis is on the online setting (we saw this in an earlier module, not module 10 as stated on the slide; we will give the correct number soon). The equations were given on the slide: the gain matrix K_m is computed recursively, and since here again we are interested in computing the inverses of certain matrices, K_{m+1} is obtained from K_m by the Sherman-Morrison-Woodbury formula, which we may remember; this way we never recompute the inverse from scratch. A sketch of this rank-one update appears at the end of this section. So we have covered offline problems as well as online problems, and we have summarized all the equations whose solutions give rise to the least squares solution.

For the offline ordinary non-linear case we have to solve a set of non-linear equations, obtained by setting the gradient equal to zero. In the online ordinary non-linear case we are again solving equations of this kind; that is the first-order case. In the second-order case we are again solving linear systems, but now the system matrix involves the Hessian terms, and both the matrix and its expression are large.

So you can see that no matter what formulation we have used, the method of normal equations at its core calls for solving linear systems of the form A x = b where A is a symmetric positive definite matrix. That is the bottom line. No matter where you start, offline or online, linear or non-linear, well conditioned or ill conditioned, all these different formulations lead, from a mathematical perspective, to solving one simple problem: A x = b, where the matrix A is not just any matrix but a symmetric positive definite matrix.

A standard method for solving a linear system with a symmetric positive definite matrix is the Cholesky decomposition method. We will talk about this method in module 4.2. Again, we are trying to build a bridge between what has happened and what is to come, and to show why and how the two are interrelated. Most of the methods for solving linear systems rest on what are called decomposition techniques. So in the Cholesky approach, what do you need to do first? You need to compute the matrix A, which in general is given by H^T H or H H^T. So, to use the Cholesky method here, we first multiply H^T with H (or H with H^T) to obtain A, and then we perform the decomposition. We will see some of the details in module 4.2; a preview sketch also follows below.
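Here is the promised sketch of the online rank-one update. The function and variable names are my own choices, and I am writing the update in the standard recursive least squares form, where P stands for the inverse Gramian (H^T H)^{-1} that the course tracks through K_m; the Sherman-Morrison formula refreshes it when a single new observation row h arrives:

```python
import numpy as np

def rls_update(x, P, h, z_new):
    """Fold one new observation z_new ~ h^T x into the running
    least squares estimate. P is the current inverse Gramian
    (H^T H)^{-1}; Sherman-Morrison updates it for the rank-one
    change H^T H -> H^T H + h h^T without a fresh inversion."""
    Ph = P @ h
    K = Ph / (1.0 + h @ Ph)          # gain vector
    x_new = x + K * (z_new - h @ x)  # correct the estimate
    P_new = P - np.outer(K, Ph)      # Sherman-Morrison update
    return x_new, P_new

# Stream observations one at a time; the result matches the batch solve.
rng = np.random.default_rng(1)
H = rng.standard_normal((10, 3))
z = rng.standard_normal(10)
P = np.linalg.inv(H[:3].T @ H[:3])   # initialize from the first 3 rows
x = P @ (H[:3].T @ z[:3])
for h, zk in zip(H[3:], z[3:]):
    x, P = rls_update(x, P, h, zk)
# x now agrees with np.linalg.lstsq(H, z, rcond=None)[0]
```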
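And as a preview of module 4.2, here is a bare-bones sketch of the Cholesky factorization itself, applied to the Tikhonov-regularized system (H^T H + alpha I) x = H^T z from above. The helper names are illustrative, and SciPy's triangular solver carries out the forward and back substitutions:

```python
import numpy as np
from scipy.linalg import solve_triangular

def cholesky(A):
    """Textbook Cholesky: factor a symmetric positive definite A
    as L L^T with L lower triangular."""
    n = A.shape[0]
    L = np.zeros_like(A, dtype=float)
    for i in range(n):
        for j in range(i + 1):
            s = A[i, j] - L[i, :j] @ L[j, :j]
            if i == j:
                L[i, i] = np.sqrt(s)   # sqrt of a negative means A is not SPD
            else:
                L[i, j] = s / L[j, j]
    return L

def spd_solve(A, b):
    """Solve A x = b for SPD A: factor once, then two triangular solves."""
    L = cholesky(A)
    y = solve_triangular(L, b, lower=True)        # forward substitution
    return solve_triangular(L.T, y, lower=False)  # back substitution

# Tikhonov-regularized normal equations for a possibly ill-conditioned H.
rng = np.random.default_rng(2)
H = rng.standard_normal((10, 4))
z = rng.standard_normal(10)
alpha = 1e-2
x = spd_solve(H.T @ H + alpha * np.eye(4), H.T @ z)
```

Note that the factorization is computed once; solving for additional right-hand sides afterwards costs only two triangular solves each.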
An alternate way would be, instead of first forming H^T H and then decomposing it by the Cholesky method (first winding and then unwinding), to decompose H directly. That is, instead of multiplying H with H^T or H^T with H and then applying Cholesky to H^T H or H H^T, we can directly decompose H to simplify the form of the least squares solution. There are two such methods we will indicate: QR decomposition and SVD, where SVD stands for singular value decomposition. QR decomposition, SVD, and Cholesky decomposition are the three popular methods which we have come to call the matrix methods for solving the resulting system A x = b with A symmetric positive definite. Since our approach is quite mathematical, and since our goal is to provide the full mathematical basis, instead of simply saying "use QR, use SVD, use Cholesky," we are also going to fill in the blanks as to how Cholesky works, how QR works, and how SVD works, because if you understand some of the intricate details of these methodologies you will be able to exploit that knowledge to accelerate convergence when solving specific problems. That is the goal. (A small sketch of the QR and SVD routes is appended after the close of this module.)

The alternate method would be, instead of setting the gradient equal to zero and solving the resulting linear equations, to directly minimize f(x), the sum of the squares of the residuals, using iterative methods. Some of the well-known methods for iterative minimization are gradient methods, conjugate gradient methods, and quasi-Newton methods. We are going to provide an overview of the workings of these methods as well; a sketch of the basic gradient method is likewise appended below. These methods become an integral part of the data assimilation process. In fact, anybody who is interested in developing a data assimilation system has to program one or two of these methods in order to bring the mathematical formulations into the computational domain, and that is where these methods are very useful.

So, in summary, in this module we have provided a quick overview of all the results from the previous modules. These are summaries of chapters 5 through 7 in our textbook, Lewis, Lakshmivarahan and Dhall (2006). With this as a background, as a bridge between the previous modules and the coming attractions, we are now going to get into the nitty-gritty details of matrix methods as well as direct minimization techniques for minimizing f(x). These two classes of methods constitute the basic workhorse of the data assimilation process. With this we conclude this module. Thank you.
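Appendix to the module, as promised above. First, the decomposition route: a minimal sketch (synthetic data, illustrative names) of solving the same least squares problem by factoring H directly with QR or SVD, never forming the Gramian:

```python
import numpy as np

rng = np.random.default_rng(3)
H = rng.standard_normal((10, 4))
z = rng.standard_normal(10)

# QR route: H = Q R, so H^T H x = H^T z collapses to R x = Q^T z.
Q, R = np.linalg.qr(H)               # reduced QR factorization
x_qr = np.linalg.solve(R, Q.T @ z)   # R is upper triangular

# SVD route: H = U S V^T, so x = V S^{-1} U^T z.
U, s, Vt = np.linalg.svd(H, full_matrices=False)
x_svd = Vt.T @ ((U.T @ z) / s)

# Both agree with the normal-equations solution when H has full rank.
```

Avoiding the explicit product H^T H is the point here: forming the Gramian squares the condition number of the problem, while QR and SVD work with H at its original conditioning.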
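Second, the direct-minimization route that module 4.3 will develop: a minimal steepest-descent sketch for f(x) = (1/2)||Hx - z||^2. The function name, iteration cap, and tolerance are my own choices; for this quadratic objective the exact line-search step is available in closed form, and conjugate gradient improves on this scheme by choosing mutually conjugate search directions:

```python
import numpy as np

def steepest_descent(H, z, iters=500, tol=1e-10):
    """Minimize f(x) = 1/2 ||H x - z||^2 by moving along the
    negative gradient; for this quadratic f the optimal step
    length along each direction has a closed form."""
    x = np.zeros(H.shape[1])
    for _ in range(iters):
        r = H.T @ (z - H @ x)            # negative gradient of f at x
        if np.linalg.norm(r) < tol:
            break
        Hr = H @ r
        x = x + (r @ r) / (Hr @ Hr) * r  # exact line search along r
    return x
```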