Welcome to this session on solving simultaneous equations. Simultaneous equations arise on many occasions in mathematical and scientific analysis. To represent these equations we will use multidimensional arrays. But first, let us quickly review what we have studied already. We know that an array holds a set of values of the same type, declared with a type and a size, and that the elements of an array are accessed by index expressions which must evaluate to a value between 0 and size minus 1.

We now look at multidimensional arrays, also called matrices. A matrix is actually a two-dimensional array, and in C++ we can in fact declare arrays with more than one dimension. A two-dimensional array would be declared in this fashion: int A[50][40]. Note that the two dimensions are independently defined in terms of their sizes. So this effectively means that A is an array of integer elements with 50 rows and 40 columns. Each element of this two-dimensional array is accessed by a reference which now requires two index expressions. So, for example, A[i][j] means the element in the i-th row and j-th column; if we write A[i][j] = 3782, then the i-th row, j-th column element is assigned this value. We note that the row index i can take a value from 0 to 49, because the size of the first dimension is 50. Similarly, the column index j can take a value from 0 to 39, because the column index can vary between 0 and 40 minus 1, which is 39. Naturally, all the rules for index expressions apply to the index of each dimension.

We now use these two-dimensional arrays to represent simultaneous equations. To begin with, let us look at some simple algebraic equations, which all of you would have solved in your high school algebra. Consider 2x + 4y = 8 as equation 1 and 4x + 3y = 1 as equation 2. The objective is to solve this system of linear equations and find the values of x and y.
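The declaration and access pattern just described can be sketched in C++ as follows. The sizes 50 and 40 and the value 3782 follow the lecture's example; the indices 7 and 12 and the function name elementDemo are our own, chosen only for illustration.

```cpp
// Declare a 50-row, 40-column integer matrix and access one element.
int elementDemo() {
    int A[50][40] = {};   // 50 rows and 40 columns, zero-initialized
    int i = 7;            // row index: any value from 0 to 49
    int j = 12;           // column index: any value from 0 to 39
    A[i][j] = 3782;       // assign the i-th row, j-th column element
    return A[i][j];
}
```

Note that C++ does not check bounds: indexing outside 0 to 49 for rows or 0 to 39 for columns is undefined behaviour, so the rule that each index expression stays within its dimension's range is the programmer's responsibility.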
To begin with, we look at how we can represent these two equations using a matrix. Suppose we choose to put the coefficients of x and y from both equations into a matrix in this fashion: the first row holds the coefficients of x and y in the first equation, and the second row holds the coefficients in the second equation. We then write the unknowns as a vector, that is, a one-dimensional array holding x and y, and set this equal to the right-hand side, which is 8 and 1. In fact, those of you who recall matrix operations will know that this effectively means 2 multiplied by x plus 4 multiplied by y equals 8, and 4 multiplied by x plus 3 multiplied by y equals 1.

We all know how to solve this, but here we consider a specific method which generalizes and systematizes the usual solution process, known as the Gaussian elimination method. This method uses two important properties of linear algebraic systems to solve such equations. The first: a system of equations is not affected if an equation is multiplied by a non-zero constant. What this means is, if I have 2x + 4y = 8 as an equation and I multiply it by some value, say 0.5, to get 1x + 2y = 4 as equation 1', both equations mean the same thing, and therefore the overall system of two equations does not change at all. Observe that 0.5 is not an arbitrary number; we have chosen it so that the coefficient of x becomes 1. We shall comment on this as we proceed. With this change, we now have the same system of equations, but in this form: 1x + 2y = 4 as equation 1' and 4x + 3y = 1 as equation 2. We again represent these equations in the matrix form that we saw: this system naturally translates into a two-dimensional matrix with rows (1, 2) and (4, 3), the vector of unknowns x and y, and 4 and 1 representing the right-hand side.
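The first property, multiplying an equation by a constant, can be sketched as a small C++ routine. The name scaleRow is our own; a holds the 2 by 2 coefficient matrix and b the right-hand side of the two-variable example.

```cpp
// Property 1: multiplying an equation by a non-zero constant leaves
// the system unchanged. Scale row r of the coefficient matrix a and
// the matching entry of the right-hand side b by the same factor.
void scaleRow(double a[2][2], double b[2], int r, double factor) {
    for (int j = 0; j < 2; ++j)
        a[r][j] *= factor;
    b[r] *= factor;
}
```

Starting from the matrix rows (2, 4) and (4, 3) with right-hand side 8 and 1, calling scaleRow with row 0 and factor 0.5 produces the row (1, 2) with right-hand side 4, which is exactly equation 1'.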
The second important property that we utilize to implement the Gaussian elimination algorithm is this: if an equation is replaced by a linear combination of itself and any other equation, the system of equations remains the same. So, basically, we are transforming the system into an equivalent system, but the new system should be more amenable to finding the final solution. In terms of practical usage, we multiply equation 1' by 4 to get 4x + 8y = 16, and subtract this from equation 2 to get a new equation 2'. Let us look at this operation. Equation 2 was originally 4x + 3y = 1. After multiplying equation 1' by 4, we have 4x + 8y = 16, which we subtract from equation 2 to get the new equation 2', which happens to be 0x - 5y = -15. Notice the importance of the choice of 4 as the multiplying factor and of the operation of subtraction, which together make the coefficient of x equal to 0.

If we now write the two new equations that we have, which represent the same system, we have 1x + 2y = 4 and 0x - 5y = -15, represented by the matrix with rows (1, 2) and (0, -5), the vector (x, y), and 4 and -15 as the right-hand side. Observe that the matrix has undergone change, and so has the right-hand side. Also observe that the matrix has undergone a very important change: we have 1 on the diagonal and 0 below it.

Let us look at further operations. If we multiply equation 2' by -0.2, or equivalently divide the whole equation by -5, we get 1x + 2y = 4 and 0x + 1y = 3. Notice that we are following the same rules, and therefore this system is the same as the previous system. But now the matrix has rows (1, 2) and (0, 1), and the right-hand side is 4 and 3. We continue to call these equation 1' and equation 2'. Now you will observe the importance of all the operations that we have done: notice that these two equations can be read backwards.
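The second property can be sketched in the same style. The name eliminateRow is our own; it subtracts a multiple of one row from another, in both the coefficient matrix and the right-hand side.

```cpp
// Property 2: replacing an equation by itself minus a multiple of
// another equation leaves the system unchanged. Subtract
// factor * (row src) from row dst of a, and likewise in b.
void eliminateRow(double a[2][2], double b[2], int dst, int src, double factor) {
    for (int j = 0; j < 2; ++j)
        a[dst][j] -= factor * a[src][j];
    b[dst] -= factor * b[src];
}
```

With the matrix rows (1, 2) and (4, 3) and right-hand side 4 and 1, calling eliminateRow with dst = 1, src = 0, and factor 4 yields the row (0, -5) with right-hand side -15, which is equation 2' above; scaling that row by -0.2 then gives (0, 1) and 3.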
First we look at the last equation, 0x + 1y = 3. This directly gives us the value y = 3. Next we use this value y = 3 and substitute it into the first equation: 1x + 2(3) = 4. This is called back substitution. It gives us x + 6 = 4, which in turn gives us the solution x = -2. Notice that having transformed the matrix, by such permissible operations, into a form with 1s on the diagonal, 0s below the diagonal, and some numerical values above the diagonal reduces the system of equations to a form where we get the solution for the last variable directly, and by back substituting we can successively find the previous variables.

The essence of this method is to reduce the coefficient matrix to what is called an upper triangular matrix: all elements on the diagonal are 1, all elements below the diagonal are 0, and the only possibly non-zero off-diagonal elements lie above the diagonal. Then we use back substitution. This process is of course susceptible to round-off errors when the system is very large. There are other variations which tackle such problems. We will of course not go into those details, but the names include Gauss-Jordan elimination, the use of pivoting, LU decomposition, and so on. In fact, there is a huge amount of literature with lots of useful algorithms which solve these and similar computational problems involving systems of equations. A very useful reference is Numerical Recipes in C++, written by the same four stalwarts who earlier wrote the similar Numerical Recipes in C and in Fortran.

Coming back to generalized simultaneous equations: a system of linear equations in n variables we can now represent by the following matrices, the first of which is an n by n coefficient matrix.
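The back substitution step on the reduced two-variable system can be sketched as follows. The name backSubstitute is ours; it assumes the matrix has already been brought to the upper triangular form with 1s on the diagonal, as in equations 1' and 2' above.

```cpp
// Back substitution on the reduced 2x2 system with 1s on the
// diagonal and a 0 below it:  1x + 2y = 4,  0x + 1y = 3.
void backSubstitute(double a[2][2], double b[2], double x[2]) {
    x[1] = b[1];                    // the last equation gives y directly
    x[0] = b[0] - a[0][1] * x[1];   // substitute y back into equation 1'
}
```

Starting from the matrix rows (1, 2) and (0, 1) with right-hand side 4 and 3, this recovers y = 3 and then x = 4 - 2(3) = -2, matching the hand calculation above.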
In the C++ style, the coefficients are a[0][0], a[0][1], a[0][2], ..., a[0][n-1] in the first row, then a[1][0], a[1][1], and so on in the subsequent rows, and the variables are named x[0], x[1], x[2], ..., x[n-1], which are naturally the elements of a one-dimensional array holding the unknown variables. The right-hand side consists of ordinary numerical values b[0], b[1], ..., b[n-1], again forming a one-dimensional array. The Gaussian elimination technique will essentially reduce the coefficient matrix to an upper triangular form: we will have 1s on the diagonal, all 0s in the lower-left corner below the diagonal, and numerical values above it, which represent the modified system of equations. This represents the same system anyway, because we would be following only the prescribed rules. Remember that the right-hand side values b[0], b[1], and so on will not be the original values; these too are modified by the computations that are performed in order to carry out this transformation. But once we obtain this form, it is easy to see that the last variable can be determined directly from b[n-1], and by back substitution we can successively determine all the other unknown variables.

In summary, in this session we have examined the properties of a system of simultaneous equations in many variables. We have understood how matrices can be used to represent such systems, and in particular we have studied the Gauss elimination method of reducing the coefficient matrix to an upper triangular form so that the system can be solved very easily. In the next session we will attempt to write a program to implement the Gaussian elimination technique. Thank you.
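As a minimal sketch of the full method for an n-variable system, the two row operations and back substitution can be combined as below. The function name gaussianSolve is our own, and this sketch omits the pivoting refinements mentioned above, so it assumes that no zero ever appears on the diagonal during the reduction.

```cpp
#include <vector>
using std::vector;

// Minimal Gaussian elimination with back substitution for an n x n
// system a.x = b. No pivoting: assumes every diagonal element is
// non-zero when its turn comes; real implementations add pivoting.
vector<double> gaussianSolve(vector<vector<double>> a, vector<double> b) {
    int n = static_cast<int>(b.size());
    // Forward reduction to upper triangular form, 1s on the diagonal.
    for (int k = 0; k < n; ++k) {
        double d = a[k][k];                // scale row k so a[k][k] == 1
        for (int j = k; j < n; ++j) a[k][j] /= d;
        b[k] /= d;
        for (int i = k + 1; i < n; ++i) {  // zero out column k below row k
            double f = a[i][k];
            for (int j = k; j < n; ++j) a[i][j] -= f * a[k][j];
            b[i] -= f * b[k];
        }
    }
    // Back substitution, from the last variable upwards.
    vector<double> x(n);
    for (int i = n - 1; i >= 0; --i) {
        x[i] = b[i];
        for (int j = i + 1; j < n; ++j) x[i] -= a[i][j] * x[j];
    }
    return x;
}
```

On the two-variable example above, this performs exactly the sequence of operations from this session: scale row 0 by 0.5, eliminate below it, scale row 1 by -0.2, then back substitute to get y = 3 and x = -2.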