Welcome to the world of numerical analysis. I am Professor Sachin Patwardhan from the Department of Chemical Engineering at IIT Bombay, and this is a series of lectures on advanced numerical analysis delivered under NPTEL Phase 2. This first lecture is an overview of the course, and over the next hour or so I am going to present a bird's eye view of what we are going to study. The world of numerical analysis is quite involved and complex, and some of you have probably already had an introduction to it. This is meant to be an advanced course; it will introduce many of these ideas in a different light, and I hope it will help you throughout your academic career. So, let us begin our journey with the motivation. A chemical plant contains a large number of interconnected units, such as heat exchangers, reactors, and distillation columns, and these days plants are very tightly integrated to achieve high energy and material efficiency. That integration makes them complex to handle, operate, and design, and it is not possible to do so without mathematical modeling. The design and operation of such complex plants is always a challenging problem, and mathematical modeling and simulation has become a very handy, cost-effective method of analyzing their behavior. In a real design or operation problem, we have to blend mathematical analysis judiciously with experiments: it is not possible to rely only on experiments, and it would not be correct to rely only on mathematical modeling; what we have to do is plan experiments very carefully using mathematical models. Mathematical modeling has thus become the backbone of modern chemical engineering design and operation.
Now, these models have to be solved either offline or online, and when you have to solve them under a variety of conditions and for a variety of problems, you need numerical tools, because most often you cannot solve these problems analytically. So numerical solution is at the heart of mathematical modeling and simulation, which in turn is used for designing and operating chemical plants. Now, what are the typical problems that a chemical engineer encounters in a plant? One, of course, is the design problem: you may have to design a new section of a plant, or, if you are part of a consulting firm that designs chemical plants, you may have to design an entire new plant. You are given a desired product composition and some raw material availability, and you have to find the unit sizes, the flow rates, and the operating conditions. Coming up with a base design, a basic flow sheet from which mechanical engineers and other engineering departments can take over, is the job of the chemical engineer. This normally involves models for the different unit operations; you have to connect all of these models into one giant mathematical model, possibly hundreds or thousands of equations, which must be solved under a variety of conditions. So this is one of the problems you normally encounter. Another could be that you are already employed in a plant and have to do process retrofitting, which involves improving the existing operating conditions.
So you have an operating plant, and some modifications become necessary: maybe the input conditions have changed, maybe the feed quality has changed, or maybe you need to run the plant at conditions different from what it was designed for because of market conditions. Retrofitting is thus another problem for an existing plant. A problem that always arises when you operate a plant is control, or online optimization. Dynamic behavior and operability analysis are an integral part of operating any complex chemical plant: you have to monitor and control the plant, make sure it is operating safely, carry out hazard analysis, conduct what-if studies, and perhaps do online optimization to run the plant in an optimal way. None of these exercises can be done without mathematical modeling and then solving those models using numerical analysis. So numerical analysis is at the heart of everything we have to undertake as chemical engineers. Now, what do these mathematical models look like? They come in different forms. We have models that give insight into long-term behavior; these are typically energy and material balances, where we look at steady-state conditions. In design or retrofitting problems you might restrict yourself to steady-state models, ignoring the transient or short-term behavior. But when you are studying the operation of a plant, when you are trying to control it, you cannot ignore the dynamics. In that situation the short-term behavior, the transients, become very important, and we have to solve the mathematical models in time, and possibly in time and space.
So, what kind of mathematical models do we need in this course? Mostly they come from first principles; they are often called mechanistic models or phenomenological models. These models are built from mass balances and component balances, something you have been doing in your various chemical engineering courses. They are composed of rate equations for mass, heat, or momentum transfer, constitutive equations, chemical reaction rate equations, and equilibrium principles used when modeling multiple phases. You may also need equations of state for systems involving gases or multiple phases. So the models actually used for design, operation, and dynamic simulation are quite complex; they are constructed from these fundamental concepts of energy and material balances, rate equations, and equilibrium models. From a mathematical viewpoint, how do I classify these models? There are many possible classifications, but the one relevant to this course, which will show up later in terms of the classes of model equations we will investigate, is the division into distributed parameter models and lumped parameter models. In this classification we look at two classes. Distributed parameter models capture relationships between variables not just in time but also in space, and when I say space it could be multiple dimensions, not just one. For example, a fluidized bed reactor, a packed bed column, or even a shell and tube heat exchanger can be modeled as a distributed parameter system.
Of course, this depends on the situation: in some cases you might use a very simple lumped parameter model for a shell and tube exchanger, while in others you may want a more complex distributed parameter model. So one class of models we will encounter in this course is distributed parameter models. The other class, which we study very often in chemical engineering, is lumped parameter models, for example stirred tank reactors, multi-stage unit operations, or mixers. These are models in which spatial variation is ignored and, if necessary, we consider variation in time alone. If you are looking at transient behavior, only time comes into the picture; if you are looking at steady-state behavior, you may get only algebraic equations. These are the two broad classes of models encountered in chemical engineering, and we are going to study how to solve the different subclasses belonging to them. Now, if we examine these models from a mathematical viewpoint, what equation forms do we encounter in this course? When you do mathematics courses in your first or second year of engineering, you look only at abstract equation forms, and it is important to relate those abstract forms to what you see in mathematical models. So what equation forms are commonly encountered in chemical engineering models? One is linear algebraic equations, which we study perhaps even before we enter an engineering program. What you study as you enter a chemical engineering program is solving nonlinear algebraic equations: very often we have to deal with single-variable or multivariable nonlinear algebraic equations; thermodynamic relationships, for example, are often nonlinear.
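As a small illustration of what a nonlinear algebraic equation solver does (a hypothetical example of my own, not taken from the lecture), the sketch below applies the Newton-Raphson iteration to a single equation f(x) = 0, assuming f is smooth and the initial guess is reasonable:

```python
import math

def newton(f, df, x0, tol=1e-10, max_iter=50):
    """Newton-Raphson iteration for a single nonlinear equation f(x) = 0."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)   # Newton correction: f(x) / f'(x)
        x -= step
        if abs(step) < tol:
            return x
    raise RuntimeError("Newton iteration did not converge")

# Illustrative equation: solve x = exp(-x), i.e. f(x) = x - exp(-x) = 0
f = lambda x: x - math.exp(-x)
df = lambda x: 1 + math.exp(-x)
root = newton(f, df, x0=0.5)
```

The same idea, applied componentwise with a Jacobian matrix, is what multivariable Newton schemes do for systems of nonlinear equations.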
Another class of problems encountered in modeling chemical engineering unit operations is ordinary differential equations. Typically an initial value is given, and we are supposed to find the solution of a first, second, third, or higher order ordinary differential equation. A further class, encountered particularly in distributed parameter systems, is boundary value problems. You may also have systems that mix differential and algebraic equations, so-called differential algebraic equation (DAE) systems. A DAE system is a mixture of algebraic and differential equations, while a boundary value problem is one in which the boundary conditions are specified partially at one boundary and the rest at the other boundary, and we are expected to solve the differential equation subject to them. Such problems typically arise when solving, say, plug flow reactor models or other distributed parameter systems. Yet another class of models often encountered when modeling chemical engineering unit operations is partial differential equations. In a real problem, when you are trying to solve something associated with a section of a chemical plant, these forms may not come in isolation; you may get a mixture of all of them. Nevertheless, when we study these equation forms we often study them in isolation, and then we understand how to attack more complex problems where a combination of them is encountered. Now, how do you go about studying these equation forms? If you look at the approaches presented in many textbooks written for engineers, the conventional approach is to study numerical recipes for each type of equation. That means you start by saying: first I will look at linear algebraic equations and the tools for solving them.
Then you move on to nonlinear algebraic equations; having studied linear and nonlinear algebraic equations, you look at ordinary differential equations, typically beginning with initial value problems, then moving on to boundary value problems, and the course would typically end with partial differential equations. So there are methods for partial differential equations, methods for ordinary differential equations, and so on. Presented this way, one can get the impression that there are entirely separate methods for solving linear equations, partial differential equations, or boundary value problems, but this is not exactly so when you start looking at these methods from a different viewpoint. And where, in the conventional approach, do you encounter the applications? After you have studied the numerical methods for each equation type, the exercises and worked examples present real engineering problems, or perhaps abstract problems in terms of some x, y, z variables with no physical meaning, and these problems are then used to reinforce the concepts you have studied for each equation type. In this course on advanced numerical analysis we are going to be different; we are going to look at things in a completely different manner. What I am interested in is understanding the fundamental steps involved in formulating a numerical scheme, and how you come up with a recipe, a solution approach, for a particular problem. If you take a critical look at all these methods, you come across certain common threads, and from those you can build a different way of studying numerical analysis. So what I am going to do here is look at two different steps separately.
If you look at all the numerical methods used for solving the different types of problems encountered, and analyze what the first step is and what the second step is, what you realize is that invariably the first step is model transformation. Many times you have mathematical problems that cannot be directly solved using existing methods; by "cannot be directly solved" I mean they cannot be solved analytically. If they cannot be solved analytically, you have to construct approximate solutions, but to construct approximate solutions you first have to convert the given problem into a computable form. A computable form is one to which known computational tools can be applied. Now, this problem transformation is carried out using tools and approaches developed in approximation theory, a well developed branch of applied mathematics. So approximation theory is used to transform the problem into a computable form, and then you use different tools to attack the transformed problem and construct a numerical solution. These tools are a linear algebraic equation solver, a nonlinear algebraic equation solver, an ordinary differential equation initial value problem solver, or a numerical optimization scheme. So when you construct a recipe, a numerical scheme to solve a problem, you first transform the problem into a form that can be tackled with one of these standard tools, and then you use one or more of these tools in combination to come up with a solution of the transformed problem.
If I put this in pictorial form: you have an original problem, which might be a partial differential equation; you take this original problem, use principles developed in approximation theory, and transform it into what I have called a standard form, by which I mean a computable form. The original form might be a partial differential equation, and when transformed it might become a set of linear algebraic equations or a set of nonlinear algebraic equations. So the original problem and the transformed problem need not have the same equation type: you have a partial differential equation on one side and a set of nonlinear algebraic equations on the other. To solve those nonlinear algebraic equations you may have to use special tools developed for that purpose, and those tools in turn might use a linear algebraic equation solver. So it is not that I am going to use just one tool; I will use several of these tools to attack the transformed problem and come up with a numerical solution of my original problem. What we are going to study in this course is these two steps: first, how do I take the original problem and transform it into a solvable, computable form? In the conventional approach this process gets mixed up with the various recipes developed for specific equation types; we are going to separate it out and view it as a step in its own right. This means that, unlike the conventional approach, I am not going to postpone partial differential equations or boundary value problems to the end of the course. I will attack these problems right at the beginning, and we will simply transform them into forms that can be solved using one or more of the standard tools.
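To make the transformation step concrete, here is a minimal sketch (my own illustration with a hypothetical problem, not taken from the lecture): a two-point boundary value problem u''(x) = f(x) with u(0) = u(1) = 0 is converted by central finite differences into a set of linear algebraic equations, which a standard tridiagonal linear solver then handles:

```python
def solve_bvp_dirichlet(f, n):
    """Discretize u''(x) = f(x) on [0, 1] with u(0) = u(1) = 0 using central
    differences on n interior points, then solve the resulting tridiagonal
    linear system with the Thomas algorithm."""
    h = 1.0 / (n + 1)
    x = [(i + 1) * h for i in range(n)]
    # Transformed problem: (u[i-1] - 2 u[i] + u[i+1]) = h^2 f(x[i])
    a = [1.0] * n                      # sub-diagonal
    b = [-2.0] * n                     # main diagonal
    c = [1.0] * n                      # super-diagonal
    d = [h * h * f(xi) for xi in x]    # right-hand side
    for i in range(1, n):              # forward elimination
        m = a[i] / b[i - 1]
        b[i] -= m * c[i - 1]
        d[i] -= m * d[i - 1]
    u = [0.0] * n                      # back substitution
    u[-1] = d[-1] / b[-1]
    for i in range(n - 2, -1, -1):
        u[i] = (d[i] - c[i] * u[i + 1]) / b[i]
    return x, u

# u'' = -2 with u(0) = u(1) = 0 has the exact solution u(x) = x(1 - x)
x, u = solve_bvp_dirichlet(lambda x: -2.0, n=9)
```

Note that the original problem is a differential equation while the transformed problem is a linear system: exactly the change of equation type described above.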
So that is the approach we want to take. What are the overall learning objectives for this course? I am assuming that you have had some exposure to numerical methods before; if you have not, it does not matter, because this course will give you, from scratch, a different viewpoint on numerical analysis, and if you do have prior experience, it will enrich your understanding. The first objective is to clearly bring out the role of approximation theory in the process of developing a numerical recipe for solving an engineering problem. I am deliberately using the word "recipe": by the end of the course you should realize that forming a numerical scheme is like cooking a dish; if you know the basic ingredients, you can combine them to come up with a particular dish. You often have to be a good cook to come up with a numerical recipe for a problem, and to be a good cook you have to understand the foundations. So the first step is problem transformation, which is based on approximation theory. The next step, of course, is solving the transformed problem, and in solving it there are two aspects. One is the algebraic aspect, how you actually write the algorithms and so on. But there are often very interesting geometric ideas associated with these numerical schemes, and if you understand those geometric ideas, if you can use your power of visualization, that can help you construct solutions much better. So, unlike a traditional course, I would like to place a lot of stress on explaining the geometric ideas associated with the development of numerical schemes.
This will help develop a deeper understanding of numerical recipes. Finally, there is an aspect we usually do not stress in a first course: convergence analysis of numerical methods, or error analysis. There are other analytical aspects associated with numerical computations as well, and I would like to stress these convergence aspects alongside the numerical aspects. We may not go too deep into them, but we will study them to some extent, so that you get a taste of what goes into understanding the convergence behavior of these schemes. All three aspects are very important when it comes to concocting a new numerical scheme. Now, if you take a critical look at the many numerical schemes available in most textbooks, you will see that two or three fundamental ideas are used in developing the computable forms. One dominant idea is approximation carried out using Taylor series expansion. The second approach is polynomial interpolation. And the third pillar of approximation, of problem simplification and discretization, is least squares approximation. So the problem transformation is carried out mainly using three fundamental tools: Taylor series expansion, polynomial interpolation, and least squares approximation, and we are going to study them in considerable detail.
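As a quick sketch of the first of these tools (an illustrative example of my own, not from the lecture), truncating the Taylor series of exp(x) about 0 gives a computable polynomial approximation, and the truncation error shrinks as more terms are retained:

```python
import math

def taylor_exp(x, n):
    """Partial sum of the Taylor series of exp(x) about 0, up to x**n / n!."""
    term, total = 1.0, 1.0
    for k in range(1, n + 1):
        term *= x / k        # build x**k / k! incrementally
        total += term
    return total

# Truncation error at x = 0.5 for increasing numbers of retained terms
errs = [abs(taylor_exp(0.5, n) - math.exp(0.5)) for n in (1, 2, 4, 8)]
```

This kind of truncated expansion is what underlies, for example, finite difference formulas for derivatives.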
The aim is to understand their role in problem transformations. After that, we are going to get an in-depth understanding of the different numerical tools: once you have transformed the problem, there are a variety of ways of attacking it to get a numerical solution. If you look at the tools available today, we can identify five broad classes, four of which I have mentioned here: linear algebraic equation solvers, nonlinear algebraic equation solvers, ordinary differential equation initial value problem solvers, and numerical optimization. I need these four toolkits to come up with a numerical scheme. The fifth, which is not mentioned here and is not going to be part of this course, is stochastic methods; that goes well beyond the scope of this course and would probably need a separate course to see how stochastic methods can be used to solve the transformed problem. We will concentrate mainly on linear algebraic equations, nonlinear algebraic equations, and ordinary differential equation initial value problems (ODE-IVPs, as they are known); along the way we will also pick up the fundamentals of numerical optimization. I do not intend to have a separate module on numerical optimization, but we will pick up its tools as we go. This course consists of six learning modules. Here I am describing what the course should ideally consist of; at the end of this slide I will tell you what I will actually lecture on. Ideally, the course should begin by relating abstract equation forms to process models. If I were delivering this course to final-year undergraduate students, I would spend the first two or three lectures talking about the different mathematical models they have already studied and the abstract equation forms that arise from them.
The second module is going to be completely different from what you do in a conventional numerical methods or numerical analysis course: a few lectures will be devoted to the fundamentals of vector spaces. Now, we start studying vector spaces probably even before we enter our engineering programs, so by the time we reach engineering we are familiar with three-dimensional vector spaces, and mostly we continue using them; perhaps you study different coordinate systems that you did not see in school, but more or less the idea of a vector space remains confined to three dimensions. In mathematics, however, in the field of functional analysis, the idea of a vector space has been developed very profoundly into a rich concept in which a large variety of sets of objects can be viewed as vector spaces. We are going to get some understanding of these generalized vector spaces, which are not just three-dimensional but four-, five-, n-dimensional, or even infinite-dimensional. In fact, these vector spaces play a fundamental role in the formulation and understanding of numerical schemes, and this is what I mean when I say I want to stress geometric ideas: the geometric ideas you understand in three dimensions can be extended to spaces of higher dimension, and that is what we will take a peek at in the second module. The third module is problem discretization using approximation theory, so a significant number of lectures will be devoted to problem transformations. Here I will start with models, which could be a set of nonlinear algebraic equations, a partial differential equation, or an ordinary differential equation boundary value problem, and transform them into computable forms.
So, unlike a conventional course, where PDEs and boundary value problems are discussed at the end, we will encounter them right at the beginning and transform them into computable forms. Once we have the standard computable forms, which could be a set of linear algebraic equations, a set of nonlinear algebraic equations, or ordinary differential equation initial value problems, we need to know how to solve them. Module four will look at a variety of numerical tools for solving linear algebraic equations; then we move on to tools for solving nonlinear algebraic equations; and finally we end with tools for solving ordinary differential equation initial value problems. Ideally, then, this course should consist of these six modules. But in delivering this set of lectures I am assuming that you are already familiar with the different model forms encountered in chemical engineering, so although module one is listed here, I am not really going to start with it; my lectures will start with module two, the fundamentals of vector spaces. In the next few slides I will very briefly touch upon what should go into module one, but from the second lecture onward we will start looking at generalized vector spaces and the role they play in numerical analysis. Moving on: how long will this journey be? It is going to be a long one; we will need about forty-eight one-hour lectures to cover these various aspects of numerical analysis. Now let me go into a little more detail on module one, which consists of abstract equation forms in process modeling. The overall point is that mathematical models in chemical engineering, together with the variety of design and operating conditions, give rise to different types of abstract equation forms, such as ODEs and partial differential equations.
At the beginning we should associate the abstract forms with real problems, because as we go along we will work only with abstract forms and lose track of the engineering problems, except when we look at examples or solve exercises. So at the start it is good to make the connection with these models and to know which equation forms will be treated in this course. Let us look at some commonly encountered examples. Where do linear algebraic equations arise in chemical engineering systems? Many times we have to solve a steady-state material balance for a lumped parameter model of a section of a plant, and this gives rise to a set of linear algebraic equations, Ax = b. Nonlinear algebraic equations you must have encountered in your third-year mass transfer, heat transfer, and unit operations courses, where energy and material balances for one unit, or for a section of a plant consisting of multiple units, give rise to nonlinear algebraic equations. Very often we also have to solve problems using numerical optimization tools, for example estimating rate parameters, say reaction kinetics parameters, or fitting mass transfer or heat transfer correlations; these are optimization-based formulations. And ordinary differential equation initial value problems arise when you start looking at control, at dynamic simulation of a chemical plant, or when you want to do hazard analysis using dynamic simulators.
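As a minimal sketch of an ODE-IVP solver at work (a hypothetical first-order decay example of my own, not taken from the lecture), the explicit Euler method marches a model dC/dt = -kC forward from its initial condition, and here it can be checked against the known analytical solution:

```python
import math

def euler(f, y0, t0, t1, n):
    """Explicit (forward) Euler for dy/dt = f(t, y) over [t0, t1] in n steps."""
    h = (t1 - t0) / n
    t, y = t0, y0
    for _ in range(n):
        y += h * f(t, y)     # one Euler step: y_{k+1} = y_k + h f(t_k, y_k)
        t += h
    return y

# Hypothetical first-order decay dC/dt = -k C with C(0) = 1 and k = 2
k = 2.0
c_num = euler(lambda t, c: -k * c, 1.0, 0.0, 1.0, 1000)
c_exact = math.exp(-k)       # analytical solution exp(-k t) at t = 1
```

Real dynamic simulators use far more sophisticated integrators, but the structure, stepping a model forward from an initial state, is the same.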
In abstract terms, these problems amount to solving coupled ordinary differential equations subject to given initial conditions or given input scenarios. You may also end up not just with differential equations but with differential algebraic equations. A common example is a distillation column, where the phase equilibrium gives rise to algebraic equations, which can be highly nonlinear, and the dynamics on the trays, temperature dynamics, composition dynamics, material balances, give rise to differential equations. If you want to simulate the dynamic behavior, not just do the design, you get coupled differential algebraic equations, and these are notoriously more difficult to solve than differential equations alone or algebraic equations alone. Differential algebraic equations arise when you have phenomena operating at different time scales: some phenomena are fast, some are slow. In such situations you retain the slow phenomena as differential equations, while for the fast phenomena you can neglect the derivatives and approximate those equations as algebraic equations, and that gives rise to a differential algebraic system. If you want to do a detailed analysis of, say, a fluidized bed reactor or a packed bed column, you have no option but to use partial differential equations, whereas in a very gross analysis, treating it as just one unit in a plant and doing energy and material balances, you can probably neglect those variations. But to study a single unit operation in detail you often have to use distributed parameter models, that is, partial differential equations. So partial differential equations arise when you look at packed bed columns, fluidized bed reactors, and so on.
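The fast/slow idea can be sketched numerically (a toy linear system of my own, not from the lecture): integrating the full two-time-scale system with a small step, the fast variable quickly relaxes onto the slow one, which is exactly the behavior the quasi-steady-state (algebraic) approximation exploits when it replaces the fast differential equation by an algebraic one:

```python
import math

# Toy two-time-scale system:
#   dy/dt = -z              (slow)
#   eps * dz/dt = y - z     (fast, eps << 1)
# Quasi-steady-state approximation: neglect eps*dz/dt, so z ~ y,
# which reduces the model to the single ODE dy/dt = -y.
eps = 1e-3
h, n = 1e-4, 10000           # small step: the fast mode makes the system stiff
y, z = 1.0, 0.0
for _ in range(n):           # explicit Euler on the full system up to t = 1
    y, z = y + h * (-z), z + h * (y - z) / eps
y_qssa = math.exp(-1.0)      # solution of the reduced model dy/dt = -y at t = 1
```

Note how small the time step must be for the full stiff system; the reduced differential-algebraic form avoids that fast scale altogether.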
In the beginning, then, it is good to make these associations and understand where the abstract equation forms arise, but as I said, my lectures will start from module two, because they are meant for somewhat advanced students, in the final year of a chemical engineering undergraduate program or perhaps the first year of a graduate program. Here we begin with the fundamentals of vector spaces. What are the learning objectives? First, I want you to understand two fundamental operations, vector addition and scalar multiplication, and to see how these operations hold in any vector space. What do I mean by "any" vector space? I am going to define sets, called vector spaces, on which these two operations hold, and these sets will be other than the familiar three-dimensional vector spaces. For example, I will introduce the set of continuous functions over some domain, say [0, 1], or the set of continuous functions over [0, infinity). Such sets arise when we solve differential equations and partial differential equations, and if you have a basic geometric understanding of these underlying spaces, it is much easier to develop solutions for these kinds of equations. So we are going to look at the abstract notion of a vector space and at generalized vector spaces such as function spaces. A vector in such a space is a function: for example, in the set of all continuous functions over [0, 2*pi], sin x is a vector, and so are cos x and cos 2x; another vector could be a straight line a + bt defined over [0, 2*pi], or some polynomial defined over [0, 2*pi].
These are generalized sets, not just the three-dimensional vector spaces you are familiar with, and in this module we will study how these sets qualify to be called vector spaces and how the geometric ideas that hold in three dimensions can be extended to these higher-dimensional spaces. We will go on to generalize concepts such as subspace, linear dependence, the span of a set of vectors, and the basis of a vector space, and we will examine examples of different sets that qualify as vector spaces, or as subspaces of a vector space, and so on. This is the beginning of a grand geometric generalization, carried out in mathematics perhaps sixty to a hundred years ago, and if you have some idea of these generalizations, it becomes very easy to understand the underlying foundations of numerical analysis. That is why the first few lectures will be devoted to understanding these generalized sets. Now, when we work in a three-dimensional vector space, what do we actually need? When working with vectors, we need to know the length of a vector. So when we move on to generalized vector spaces, we define something called the norm of a vector, which can be viewed as a generalization of the concept of length. We are going to distill out the essential properties that define length in three dimensions and generalize them to this concept of a norm. Is there a unique way of defining a norm? What we will find is that a norm can be defined in multiple ways; the way we define the so-called length in three dimensions is just one way, a special case.
Now, one can do visualizations in two or three dimensions, and if you understand those visualizations, then you might at least be able to extend your imagination to see what is happening in a higher dimensional space or a function space. That is what we are going to look at in this part. When you are dealing with numerical analysis, a thing that you invariably encounter is convergence of a numerical scheme. We start with a guess solution and construct a new solution from the initial solution, and the question is whether this sequence of vectors, generated in the process of producing approximate solutions, is converging to some point in the same space. We need to examine this thoroughly when it comes to understanding the behavior of numerical solutions. So, in abstract terms, we are going to look at sequences of vectors, and we also have to talk about convergence. In fact, to talk of convergence of a sequence of vectors, we have to talk of nearness of two vectors, and to talk of nearness we have to measure the distance between two vectors, and this is where the concept of norm becomes vital. The ideas that you use in three dimensions need to be generalized to higher dimensions. We will look very briefly at the concept of a normed space, that is, a space on which a norm is defined, and we will also briefly understand what are called Banach spaces, or complete normed spaces. The concept of a Banach space may not be required later in the course, but it is good to have an understanding of this idea when we start generalizing the concept of a vector space.
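To make concrete the idea that the familiar "length" is just one of many possible norms, here is a small sketch (in Python; the example vector and the choice of p-norms are my own illustration, not from the lecture):

```python
import math

def p_norm(x, p):
    """p-norm of a vector x (a list of floats); p = float('inf') gives the max norm."""
    if p == float('inf'):
        return max(abs(xi) for xi in x)
    return sum(abs(xi) ** p for xi in x) ** (1.0 / p)

v = [3.0, -4.0, 0.0]
# The familiar Euclidean length is just the p = 2 special case.
print(p_norm(v, 1))             # 7.0
print(p_norm(v, 2))             # 5.0
print(p_norm(v, float('inf')))  # 4.0
```

All three satisfy the defining properties of a norm (positivity, scaling, triangle inequality), which is exactly the distillation the lecture refers to.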
The most important concept that we use when we do geometry in three dimensions is orthogonality. We like to work with orthogonal sets; the x, y, z coordinates that we normally take form an orthogonal coordinate system. So, orthogonality, which is so useful in three dimensions, is also useful in other spaces like function spaces. We need to generalize the concept of orthogonality to these other spaces, and this is done through what are called inner products. We are going to define a special class of vector spaces called inner product spaces: a vector space, a set of objects, on which an inner product is defined. When it comes to three dimensional space, all of you are familiar with the dot product. So, when I generalize the idea of a vector space from three dimensions to more general sets, I would also like to have something similar to a dot product, and this inner product is going to give me exactly that. In fact, what we will see is that the dot product is one way of defining an inner product. So, the inner product is a grand generalization which will help us generalize the concept of orthogonality and orthogonal vectors to general spaces, function spaces, and so on. We use the dot product to define the angle between two vectors in three dimensions, and when we move on to more general spaces, sets of functions, sets of polynomials, we need something like the dot product, and that is going to be the inner product in these spaces.
So, we are going to look at a variety of inner product spaces. There are different ways of defining an inner product, not only the one way you know from three dimensional space, and we will look at those different definitions. Well, one of the fundamental relations that we use in three dimensions is that cos theta, for the angle between any two vectors, is the dot product of the unit vectors in the two directions. If I have vectors A and B, I find the unit vector along A and the unit vector along B, and their dot product gives me cos theta between A and B. A generalization of this particular idea to inner product spaces is nothing but the so-called Cauchy-Schwarz inequality. The name might sound intimidating, but this is a very fundamental result in inner product spaces, and it will help us define the angle between two vectors. Here a vector, as I said, is going to be a function, and then we need to talk about orthogonal functions. You might have come across statements in your undergraduate education saying that sin theta, sin 2 theta, sin 3 theta are orthogonal to each other. Why are they orthogonal? If you understand the concept of inner products and inner product spaces, this will no longer be a mystery. So, the generalization of the angle between two vectors is achieved through the inner product, and the Cauchy-Schwarz inequality is the fundamental inequality which generalizes the fact that cos theta is the dot product of unit vectors in three dimensions. We will then look at a variety of orthogonal and orthonormal sets that are very often used in numerical analysis, for example, the Legendre polynomials or the Laguerre polynomials.
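As a numerical illustration of these ideas (my own sketch in Python, not from the lecture: the trapezoidal quadrature and the choice of sin t, sin 2t on [0, 2 pi] are assumptions for the example), the inner product of two functions can be taken as the integral of their product, under which the sine functions above really are orthogonal, and the Cauchy-Schwarz inequality can be checked directly:

```python
import math

def inner(f, g, a, b, n=10000):
    """Approximate the inner product <f, g> = integral of f(t)*g(t) over [a, b]
       by the trapezoidal rule with n panels."""
    h = (b - a) / n
    s = 0.5 * (f(a) * g(a) + f(b) * g(b))
    for k in range(1, n):
        t = a + k * h
        s += f(t) * g(t)
    return s * h

a, b = 0.0, 2.0 * math.pi
s1 = lambda t: math.sin(t)
s2 = lambda t: math.sin(2.0 * t)

# sin t and sin 2t are orthogonal on [0, 2*pi]: their inner product vanishes.
print(abs(inner(s1, s2, a, b)) < 1e-8)   # True

# Cauchy-Schwarz: |<f, g>| <= ||f|| * ||g||, with the norm induced by the inner product.
lhs = abs(inner(s1, s2, a, b))
rhs = math.sqrt(inner(s1, s1, a, b)) * math.sqrt(inner(s2, s2, a, b))
print(lhs <= rhs)                        # True
```

The same `inner` function applied to polynomials (with a suitable weight) is exactly what makes the Legendre and Laguerre families "orthogonal polynomials".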
Now, we encountered these names in math courses, and often we do not know why they are called orthogonal sets or orthogonal polynomials. If you start from the fundamentals of vector spaces, you will get an in-depth understanding of why. Well, it is not always the case that the set of vectors you have is orthogonal, but if you have a non-orthogonal set of vectors, one can systematically construct an orthogonal set from it. For example, in three dimensions you may have come across the method called Gram-Schmidt orthogonalization: you start with three vectors which are not orthogonal, and from these one can systematically construct three new vectors which are orthogonal to each other. This process of constructing an orthogonal set from a non-orthogonal set is called Gram-Schmidt orthogonalization, and we are going to study it in a general inner product space. It is very useful for getting insight into how different orthogonal sets are developed, and we will look at examples of generating orthogonal sets starting from non-orthogonal sets. From inner product spaces we then move on to the third module. This is going to be a very, very important module, I would say the heart of this course: how do you discretize a problem using approximation theory? As I told you in the beginning, it is often not possible to solve a given problem in its original form. Most of the time the problem you have is not linear, which means it could consist of non-linear algebraic equations, non-linear differential equations, or non-linear partial differential equations. When you have linear differential equations or linear partial differential equations, you can many times construct solutions analytically, at least for some idealized situations. This becomes very difficult even with slight non-linearities, and it may not be possible to obtain analytical solutions.
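The Gram-Schmidt construction described above can be sketched in a few lines (Python; the three starting vectors are my own example, and I use the ordinary dot product, though the same code works with any inner product):

```python
def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def gram_schmidt(vectors):
    """Classical Gram-Schmidt: from a linearly independent set, build an
       orthogonal set by subtracting from each vector its projections
       onto the previously constructed orthogonal vectors."""
    ortho = []
    for v in vectors:
        w = list(v)
        for u in ortho:
            c = dot(v, u) / dot(u, u)   # projection coefficient of v on u
            w = [wi - c * ui for wi, ui in zip(w, u)]
        ortho.append(w)
    return ortho

u1, u2, u3 = gram_schmidt([[1.0, 1.0, 0.0], [1.0, 0.0, 1.0], [0.0, 1.0, 1.0]])
print(abs(dot(u1, u2)) < 1e-12)  # True: the constructed vectors are mutually orthogonal
print(abs(dot(u1, u3)) < 1e-12)  # True
print(abs(dot(u2, u3)) < 1e-12)  # True
```

Replacing `dot` with a function inner product and the input vectors with 1, t, t^2, ... is precisely how orthogonal polynomial families are generated.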
This means you have to construct numerical solutions. To construct numerical solutions, we first have to transform the problem into standard forms. This is because we do not have tools to solve all kinds of problems; we can only tackle certain types of equation forms. So, the first step is to convert a given problem into one that can be tackled using standard tools, and then we attack the problem to construct a solution. By some means, by using multiple ideas from approximation theory together, we transform the problem into a computable form. Is there a unique way of doing this? Obviously not. A given problem can be transformed into a computable form by a variety of means, and if you have to choose between different transformations, you need an in-depth understanding of how these transformations are done. Why do you choose one over the other? Should I use a Taylor series approximation, or should I use interpolation? Unless you know the foundations, it is difficult to make these choices, so it is good to have a foundation in approximation theory. This step of model transformation is often referred to as problem discretization, and in this set of lectures we are going to look at popular approaches available in the literature for approximating a given problem into computable forms. The first thing that I want to do, before I begin this transformation, is to show that the different problems you encounter in numerical analysis are only seemingly different. Once you start viewing these problems from the viewpoint of generalized vector spaces, they do not really appear to be different problems. One can come up with a grand generalization that there is one single problem: in a particular vector space this problem will be called a set of algebraic equations.
In another kind of vector space, a similar problem will be called solving differential equations, initial value problems. In some other vector space, this problem will be a partial differential equation. So, understanding this grand generalization, even briefly, helps us develop discretizations into computable forms in a better way. The basic problem, one can show, is nothing but an operator operating on a vector giving another vector, and there are standard problems associated with this fundamental equation. The first is: given the operator, say T, operating on a vector x, find y; that is, given the operator and the cause, find the effect. These are called direct problems. The second problem is: given the operator T and y, find x; that is, I know the effect and I want to find the cause. These are called inverse problems. Our course is mostly going to deal with inverse problems: given an operator operating on a vector, and given y, the effect, find the cause x. Then we will look at specific tools used in problem approximation. What it turns out is that the backbone of approximation is approximating a given function using a set of polynomials. The fundamental theorem in approximation theory, called the Weierstrass approximation theorem, lays the foundation of all the problem discretization methods used in numerical analysis. This theorem states that any continuous function over a finite domain can be approximated with an arbitrary degree of accuracy using polynomials.
So, the theorem does not tell you which polynomial to use; it just tells you that such a polynomial approximation exists, and it is up to us to construct the approximations. But a brief study of the Weierstrass theorem will give you the foundation of how this whole business of transforming an original problem into a computable form is done. We will look very briefly at the Weierstrass approximation theorem and then start looking, one by one, at commonly used polynomial approximations. Which is the most commonly used? As I said, it is the Taylor series approximation. This is used in a variety of numerical tools, for example, in developing the finite difference method. The finite difference method is used for discretization of ordinary differential equation boundary value problems (ODE-BVPs), which get transformed into sets of algebraic equations; the method is also used for transforming partial differential equations into sets of algebraic equations. We will also study the Taylor series in a different context: you are probably familiar with Newton's method, sometimes called the Newton-Raphson method, for solving non-linear algebraic equations. This method also originates from the Taylor series approximation, that is, approximating a non-linear set of equations locally using a Taylor series and thereby converting the problem into a sequence of linear algebraic problems. So, we will look at the Taylor series approximation as a fundamental tool and how it is applied to a variety of problem transformations: transforming a partial differential equation, transforming a boundary value problem, transforming a set of non-linear algebraic equations. Then we continue our journey into other types of approximation.
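To see how the Taylor series idea turns a non-linear equation into a sequence of linear ones, here is a minimal scalar Newton-Raphson sketch (Python; the test equation x^2 - 2 = 0 is my own illustrative choice, not from the lecture):

```python
def newton(f, fprime, x0, tol=1e-10, max_iter=50):
    """Newton-Raphson: linearize f about x_k via Taylor series,
       f(x) ~ f(x_k) + f'(x_k)(x - x_k), and solve the linear model for the
       next iterate: x_{k+1} = x_k - f(x_k)/f'(x_k)."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / fprime(x)
        x -= step
        if abs(step) < tol:
            break
    return x

# Illustrative example: solve x**2 - 2 = 0, i.e. compute sqrt(2).
root = newton(lambda x: x * x - 2.0, lambda x: 2.0 * x, x0=1.0)
print(abs(root - 2.0 ** 0.5) < 1e-10)  # True
```

The multivariate version replaces the scalar derivative by the Jacobian matrix and the division by a linear solve, which is exactly where the linear algebraic equation tools of the later modules come in.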
The second, equally important, approximation is polynomial interpolation. In the beginning we will get a brief understanding of Lagrange interpolation. It is a vast area, and we cannot do justice to every aspect of interpolation; I am just going to give you a brief introduction to some important concepts. So, we will begin with Lagrange interpolation, move on to piecewise polynomial interpolation, and then look not just at polynomial interpolation but also at function interpolation, where linearly independent functions are used to construct the interpolating functions. Then we will look at problem discretization using this approach: I am going to take an ordinary differential equation boundary value problem and discretize it using interpolation polynomials, or discretize a partial differential equation using interpolation polynomials. That is my next task: to study how interpolation plays a role in problem discretization. In particular, we are going to look at the method of orthogonal collocation, which is a very powerful method used in solving a variety of chemical engineering problems, and then have a brief look at orthogonal collocation on finite elements. The third important approach used for problem discretization is least squares. We are going to study various ways of approximating problems using the method of least squares. First we will develop the analytical solution of the linear least squares problem and look at its geometric interpretation; this will give us insight that is very valuable for understanding approximations in higher dimensional spaces, and then we will extend this idea to general Hilbert spaces.
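The Lagrange interpolation mentioned above can be sketched directly from its definition (Python; interpolating y = x^2 through three nodes is my own example to show that a quadratic is reproduced exactly):

```python
def lagrange_eval(xs, ys, x):
    """Evaluate the Lagrange interpolating polynomial through the points
       (xs[i], ys[i]) at the point x, using the basis polynomials l_i(x)."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        li = 1.0
        for j, xj in enumerate(xs):
            if j != i:
                li *= (x - xj) / (xi - xj)   # l_i is 1 at xs[i], 0 at the other nodes
        total += yi * li
    return total

# Three nodes on y = x**2; the unique quadratic through them is x**2 itself.
xs, ys = [0.0, 1.0, 2.0], [0.0, 1.0, 4.0]
print(abs(lagrange_eval(xs, ys, 1.5) - 2.25) < 1e-12)  # True
```

Collocation methods use exactly such polynomials, but with the nodal values treated as unknowns determined by forcing the differential equation to hold at the collocation points.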
Fundamental to this least squares approximation is the idea of projections. We normally study projections in engineering drawing, or even starting at school, where you want to find the nearest point in a plane from a given point outside the plane. So, projections are very important, and how this idea of projections is used in problem approximation is what we want to study next. We will also have a brief peek at function approximation based models and the formulation of the parameter estimation problem. Before we move on to the remaining part, that is, understanding the tools, we will also look at the least squares problem for linear-in-parameter models and the least squares formulation for non-linear-in-parameter models. In particular, we are going to look at a method called the Gauss-Newton method, which is a combination of least squares and the Taylor series approximation. Then we will finally move to the problem transformations we have been discussing, that is, how you transform a boundary value problem or discretize a partial differential equation using the method of least squares. These methods are known as minimum residual methods; a popular method in this class is the Galerkin method, and we will study it. Actually, the discretization of ordinary differential equation boundary value problems or partial differential equations using the least squares approach leads to the so-called finite element methods. We will not go in depth into this, but we will have a very brief introduction to what this finite element method is and how it is related to least squares approximations. With this we come to the end of the module on problem transformations, which brings us almost to the half-way point of the course.
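For the linear-in-parameter case, the least squares solution has a closed form via the normal equations, and geometrically it is a projection onto the column space of the data matrix. A minimal sketch (Python; fitting a line y = a + b t to my own noise-free example data, so the exact parameters are recovered):

```python
def fit_line(ts, ys):
    """Least-squares fit of y ~ a + b*t by solving the 2x2 normal equations
       A^T A p = A^T y, where A has columns [1, t]. Geometrically, A p is the
       projection of the data vector y onto the column space of A."""
    n = len(ts)
    st, stt = sum(ts), sum(t * t for t in ts)
    sy, sty = sum(ys), sum(t * y for t, y in zip(ts, ys))
    det = n * stt - st * st          # determinant of A^T A
    a = (stt * sy - st * sty) / det
    b = (n * sty - st * sy) / det
    return a, b

# Noise-free data on y = 1 + 2t is recovered exactly.
a, b = fit_line([0.0, 1.0, 2.0, 3.0], [1.0, 3.0, 5.0, 7.0])
print(a, b)  # 1.0 2.0
```

The Gauss-Newton method mentioned above repeats exactly this linear solve on a Taylor-linearized model at each iteration when the parameters enter non-linearly.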
Now, what remains is to attack the transformed problems, but before that we will look very briefly at the errors that come up in problem transformation, the approximation errors, and their bearing on the numerical solutions. So, after having transformed the problem, we begin our journey into tools. The first tool we are going to look at is solving linear algebraic equations. Here you might wonder: we have been solving linear algebraic equations since school days, what is so new about it? Maybe you are already familiar with Gauss elimination, and you may even have studied some advanced aspects like pivoting, but there is much more to linear equation solving than Gauss elimination. There are iterative methods for solving linear algebraic equations, and we are going to look at them; even numerical optimization based methods are used to solve linear algebraic equations, and we will study those as well. Apart from the numerical schemes, I am going to discuss one very important topic here: matrix conditioning. Matrix conditioning tells you how well posed or ill posed a given set of linear equations is, and that gives you insight into the behavior of the numerical solution. It may happen that you have an ill posed problem, and then the solution you compute numerically is not quite reliable. You should be able to differentiate between an ill posed problem, where the solution is inherently unreliable, and a well posed problem where a mistake has been made in computing the solution. This is possible using the concept of condition numbers, or matrix conditioning, and we are going to look at condition numbers as part of this module.
So we will begin with the study of conditions for existence of solutions of linear algebraic equations. We then move on to the geometric interpretation of solutions, which is very important. I will look at the problem through two pictures, a row picture and a column picture, that is, from two different geometric viewpoints. We will interpret geometrically what a singular matrix means, and in the beginning we will develop some understanding of the four fundamental subspaces associated with a matrix: the row space, column space, null space, and left null space. Up to now we were not talking about any solution scheme; we were talking about problem transformation, and even at the start of linear algebraic equations I am talking about geometric ideas. Now we move into numerical schemes; this is the first time in the course that we will encounter actual numerical schemes. First, of course, I am going to look briefly at Gaussian elimination and LU decomposition, and we will spend some time on the number of computations required in carrying out the Gaussian elimination process and see whether there are methods that can reduce that count. The main focus in this part is going to be an introduction to the iterative methods, but before that we will look at some special methods for solving linear algebraic equations, namely methods for sparse linear systems. Many problems have a very nice structure; sparse systems are those in which a lot of elements are zeros and there are only a few non-zero elements in a big matrix.
When solving large scale problems, say a simulation of a section of a plant, you may have thousands of equations, and when you actually start solving them, say by Newton's method, you linearize them and get a linear set of equations which could be 1000 by 1000 or 10000 by 10000. But this 10000 by 10000 matrix may not be fully populated; it will have many, many zeros, and it is possible to take advantage of this structure and come up with special schemes. These are called schemes for sparse linear systems, and we are going to look at just a few of them; it is a rich area, and we can only touch the tip of the iceberg. I am going to look at block diagonal matrices, present the Thomas algorithm for tridiagonal and block tridiagonal matrices, and look at triangular and block triangular matrices, but as I said, this is only a brief introduction before we move on to the iterative schemes. The main thing here is to familiarize you with the notion of sparse matrices, so that when you encounter them you will remember to exploit them in your application. The study of solving linear algebraic equations using iterative schemes is the next component we will look at. There are a variety of iterative schemes: you start with a guess solution, iteratively refine it, and finally approach the true solution. Very popular methods in this category are the Jacobi method, the Gauss-Seidel method, and the relaxation method. We will study these methods and their algorithms, but more importantly we will study the convergence analysis of these iterative schemes; I am going to spend quite a bit of time on understanding their convergence.
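As an example of exploiting sparsity, the Thomas algorithm mentioned above solves a tridiagonal system in O(n) operations instead of the O(n^3) of dense elimination. A sketch (Python; the 3x3 test system is my own illustration):

```python
def thomas(a, b, c, d):
    """Thomas algorithm for a tridiagonal system A x = d, where
       a = sub-diagonal (a[0] unused), b = diagonal, c = super-diagonal
       (c[-1] unused). Forward elimination followed by back substitution."""
    n = len(b)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]          # pivot after eliminating a[i]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# Example: [[2,-1,0],[-1,2,-1],[0,-1,2]] x = [1,0,1] has solution [1,1,1].
x = thomas([0.0, -1.0, -1.0], [2.0, 2.0, 2.0], [-1.0, -1.0, 0.0], [1.0, 0.0, 1.0])
print(all(abs(xi - 1.0) < 1e-12 for xi in x))  # True
```

Such matrices arise naturally from finite difference discretization of ODE-BVPs, which is why this algorithm is so useful in practice.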
The question is: if I start with a particular guess, what is the guarantee that the iterative scheme will converge to the solution of the linear algebraic equations? That will be analyzed systematically using the concept of eigenvalues; we will see the role of eigenvalues in convergence, and in the speed of convergence, and then we will look at some special forms of matrices that enhance convergence. We then move on to optimization based schemes for solving linear algebraic equations. Here I am going to use numerical optimization tools, such as the gradient search method or the conjugate gradient method, to solve a set of linear algebraic equations; that is, solving Ax = b is going to be done using optimization. It turns out that in many situations this can be a very fast approach, particularly when you are solving a large set of equations. At the end of this module I am going to present the concept of matrix conditioning, or the condition number of a matrix, and its relationship with the behavior of numerical solutions of linear algebraic equations. So, we will end with a deeper understanding of how good or how bad a numerical solution is, and we will associate that with the conditioning of the matrix. We then move on to the next tool: solving non-linear algebraic equations. Non-linear equations are encountered more often than linear ones; most real engineering models consist of coupled non-linear equations. You do not have them in single variables; you have multiple coupled variables, which give rise to coupled non-linear algebraic equations. If you are modeling a section of a plant and studying the steady state energy and material balances, there might be thousands of coupled non-linear algebraic equations that need to be solved simultaneously, so this is very, very important.
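The Jacobi iteration is perhaps the simplest of the iterative schemes discussed above; a minimal sketch (Python; the 2x2 strictly diagonally dominant system is my own example, chosen so that convergence is guaranteed):

```python
def jacobi(A, b, x0=None, tol=1e-10, max_iter=500):
    """Jacobi iteration for A x = b:
       x_i^(k+1) = (b_i - sum_{j != i} A[i][j] * x_j^(k)) / A[i][i].
       It converges when the iteration matrix has spectral radius < 1,
       which holds, for example, for strictly diagonally dominant A."""
    n = len(b)
    x = x0 or [0.0] * n
    for _ in range(max_iter):
        x_new = [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
                 for i in range(n)]
        if max(abs(xn - xo) for xn, xo in zip(x_new, x)) < tol:
            return x_new
        x = x_new
    return x

# Strictly diagonally dominant system; the exact solution is [1, 2].
A = [[4.0, 1.0], [2.0, 5.0]]
b = [6.0, 12.0]
x = jacobi(A, b)
print(all(abs(xi - si) < 1e-8 for xi, si in zip(x, [1.0, 2.0])))  # True
```

The eigenvalue analysis in this module makes the convergence condition precise: the magnitude of the largest eigenvalue of the iteration matrix governs both whether and how fast the iterates converge.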
In this particular module we will look at a variety of iterative methods used for solving non-linear algebraic equations. At the end we will also have a brief introduction to the convergence analysis of these methods, based on a famous principle in functional analysis called the contraction mapping principle. Again, this is just a brief introduction to tell you what goes into understanding the convergence of such schemes. We will begin with the method of successive substitution, one of the most elementary methods used. Then there are the derivative free methods, like Jacobi iterations, Gauss-Seidel iterations, or relaxation iterations; we will study these and then move on to derivative based iterative methods. The well known derivative based methods are the Newton type methods. We will first look at univariate Newton type methods, where you find the local derivative either exactly or approximately. Then we will formulate a multivariate secant method, an approximate derivative based method popularly known as Wegstein iterations. Then we will move on to the well known Newton's method and look at its variations, like the damped Newton method, which tries to improve the convergence behavior, and numerically friendlier versions of Newton's method called quasi-Newton methods, with rank-one updates of the Jacobian matrix. The problem with Newton's method is that you have to compute the derivative matrix, the Jacobian. If there are n equations in n variables, every iteration you have to compute an n by n matrix, and this can be numerically quite expensive if you have thousands of equations. The quasi-Newton methods allow you to do an approximate update of the Jacobian: they construct a new Jacobian using the old Jacobian, and this way they save computations.
So we are going to have a brief introduction to these quasi-Newton methods. Then we move on to solving non-linear algebraic equations using optimization. Numerical optimization is a powerful tool for solving non-linear algebraic equations. One of the popular methods in this class is the conjugate gradient method, a gradient based method, and we will have a brief look at it. There are also Hessian, or second order derivative, based methods, which in this category are called Newton's methods, and quasi-Newton methods, which are again simplifications of the Hessian based Newton methods; we will have a brief introduction to these as well. Finally, we will look at a method called the Levenberg-Marquardt method, which is a combination of the gradient method and Newton's method: you use the gradient when it is helpful to use the gradient, and the Hessian when it is helpful to use the Hessian; it is a merger of the two methods, and we will understand it briefly towards the end. We will also briefly discuss the concept of a condition number for a set of non-linear equations. You cannot have one single condition number; you can define a local condition number, which is conceptually and qualitatively similar to what we have done for linear algebraic equations. Before we wind up this module, we look at two important aspects. One is the existence of solutions of non-linear algebraic equations and its relation to the convergence of iterative methods. When we started studying linear algebraic equations, we began with the conditions for existence of solutions; we have not yet done this for non-linear algebraic equations. Here I want to give a brief introduction to the conditions for existence of solutions and their relation to the convergence of iterative methods, through the contraction mapping principle, or contraction mapping theorem.
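The method of successive substitution and the contraction mapping principle fit together very naturally; a small sketch (Python; the choice of g(x) = cos x as a contraction is my own illustration, not from the lecture):

```python
import math

def successive_substitution(g, x0, tol=1e-12, max_iter=200):
    """Fixed-point iteration x_{k+1} = g(x_k). By the contraction mapping
       principle, this converges to the unique fixed point when g is a
       contraction, i.e. |g'(x)| <= L < 1 on the region of interest."""
    x = x0
    for _ in range(max_iter):
        x_new = g(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# g(x) = cos(x) is a contraction near its fixed point (|g'| = |sin x| < 1 there),
# so the iteration converges to the solution of x = cos(x).
x_star = successive_substitution(math.cos, 1.0)
print(abs(math.cos(x_star) - x_star) < 1e-10)  # True
```

Rewriting a non-linear equation f(x) = 0 as x = g(x) is precisely the successive substitution setup, and whether the chosen g is a contraction decides whether the scheme converges.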
We will apply it to understand the convergence of the method of successive substitution, and we will also see how the contraction mapping principle can be used to analyze Newton's method, or the Newton-Raphson method. With this we will come to the end of module 5, on solving non-linear algebraic equations. We then move on to the last tool that will be discussed in this course: solving ordinary differential equation initial value problems (ODE-IVPs). This is another fundamental tool which can be used to attack the transformed problem. What are the learning objectives? As is evident from the problem transformation module, in many situations when you transform a problem you get ordinary differential equation initial value problems. So, this is one of the fundamental equation types that needs to be dealt with, and we have to develop special methods to solve this class of problems. In the beginning we will very briefly introduce the conditions for existence and uniqueness of solutions of the ODE-IVP. Then we immediately move to the study of analytical solutions of linear ordinary differential equations in multiple variables. You might wonder why I am covering analytical solutions in a course meant for constructing numerical solutions. Well, this analytical part gives an in-depth understanding of how local solutions behave, and it is going to help us when we analyze the convergence behavior of numerical schemes for solving ODE-IVPs. So, as background for developing numerical schemes, I am going to solve linear ordinary differential equations analytically, given initial conditions. I will start with a scalar equation, move on to vector equations, and then relate the two.
So, what kind of equations am I going to look at? dx/dt = Ax, where A is a matrix, and I want to understand the relationship between the eigenvalues of the matrix A and the analytical solution of this differential equation. If you can get the eigenvalues of this matrix, you can qualitatively tell how the solution is going to behave asymptotically as time goes to infinity. So, just by looking at eigenvalues we can analyze the behavior of the solutions, and we are going to study this elegant part briefly. Then, at the end of this sub-module, we will look at the relationship between linear equations and local linearization through the Taylor series approximation. We then move to the numerical methods proper for solving ODE-IVPs. Before that we need to understand some basic concepts, like marching in time: to integrate a differential equation from time 0 to some final time, you actually do it in small steps; this is marching in time. If you look at the numerical methods for solving ODE-IVPs, there are two classes: explicit methods and implicit methods. We will get an understanding of what an implicit method is and what an explicit method is, and then we move on to an important class of methods based on the Taylor series approximation. Popularly these are known as Runge-Kutta methods; they actually arise from the Taylor series approximation, and this is where I relate them to the approximation theory part that we covered earlier. We will derive the Runge-Kutta methods starting from basics, initially for the scalar case, and then move on to the multivariate case. We then move on to the next important class of methods, which is based on polynomial interpolation.
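To illustrate marching in time with an explicit method, here is the classical fourth-order Runge-Kutta step (Python; the test problem dx/dt = -x with exact solution e^(-t) is my own choice for checking accuracy):

```python
import math

def rk4_step(f, t, x, h):
    """One step of the classical 4th-order Runge-Kutta method for dx/dt = f(t, x):
       four slope evaluations combined to match the Taylor series to O(h^4)."""
    k1 = f(t, x)
    k2 = f(t + 0.5 * h, x + 0.5 * h * k1)
    k3 = f(t + 0.5 * h, x + 0.5 * h * k2)
    k4 = f(t + h, x + h * k3)
    return x + (h / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

# March dx/dt = -x, x(0) = 1 from t = 0 to t = 1; the exact solution is exp(-1).
x, t, h = 1.0, 0.0, 0.01
for _ in range(100):
    x = rk4_step(lambda t, x: -x, t, x, h)
    t += h
print(abs(x - math.exp(-1.0)) < 1e-8)  # True
```

This is an explicit method: each new value is computed directly from known quantities. An implicit method would instead require solving an equation for the new value at every step.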
So again you will see that the ideas of approximation theory play a role when you are actually solving an ordinary differential equation initial value problem. Those ideas are so fundamental that they appear everywhere in numerical analysis. We are going to study methods called multi-step methods, popularly known as predictor-corrector methods. We will derive these algorithms from scratch, starting from interpolation polynomials, first for the scalar case, then see how they can be generalized to the multivariate case, and then move on to solving ordinary differential equation initial value problems using orthogonal collocation. After that we take a brief look at the convergence analysis of numerical schemes for solving ODE-IVPs and its relationship with the selection of the integration step size. When you are integrating nonlinear differential equations, one of the key questions is how to select the integration step size; to get insight into this we need some understanding of convergence analysis. We will of course analyze linear ordinary differential equation initial value problems and apply approximate solutions to these linear problems. We already know their exact solutions, so we can compare the exact solution with the approximate solution and gain insight; that is the reason I introduce the analytical solution of linear ODE-IVPs in the beginning. Then we will see how this can be extended to nonlinear ODE-IVPs. We will look at a few concepts which are important in solving these equations, like stiff ordinary differential equations. So stiffness of ODEs is what we will look at, and then finally we will look at what are called variable step size implementations of these ODE-IVP schemes with accuracy monitoring.
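The predictor-corrector idea can be sketched compactly. This is a minimal illustration of my own (a two-step Adams-Bashforth predictor followed by one trapezoidal Adams-Moulton correction, a so-called PECE step; the function name and test equation are assumptions, not the lecture's exact derivation):

```python
import math

def adams_pc(f, x0, t0, t_end, n_steps):
    """Two-step Adams-Bashforth predictor + Adams-Moulton (trapezoidal)
    corrector for dx/dt = f(t, x), with a single corrector evaluation."""
    h = (t_end - t0) / n_steps
    t, x = t0, x0
    f_prev = f(t, x)
    # Multi-step methods need history: bootstrap with one explicit Euler step
    x = x + h * f_prev
    t = t + h
    for _ in range(n_steps - 1):
        f_curr = f(t, x)
        # Predictor: 2-step Adams-Bashforth (explicit, uses past slopes)
        x_pred = x + h * (1.5 * f_curr - 0.5 * f_prev)
        # Corrector: trapezoidal rule, with the implicit slope replaced
        # by its value at the predicted point
        x = x + 0.5 * h * (f_curr + f(t + h, x_pred))
        f_prev = f_curr
        t = t + h
    return x

# Test on dx/dt = -x, x(0) = 1, whose exact solution is exp(-t)
x_num = adams_pc(lambda t, x: -x, 1.0, 0.0, 1.0, 200)
x_exact = math.exp(-1.0)
```

The predictor is explicit and cheap; the corrector borrows the accuracy of an implicit formula without actually solving a nonlinear equation at each step, which is the practical appeal of this class of methods.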
These are all involved concepts. Of course, most of the tools and programs available today will have these built in, but you should know when to use which one and why a particular choice is made: if you have a stiff differential equation you should use a particular tool; if you have variables with widely different time scales you should use a variable step size implementation; and so on. So these things become very important. In the end I am going to talk about solving differential algebraic equations: we have studied differential equations and we have studied nonlinear algebraic equations, so we take a brief look at how to solve differential algebraic equations when they are encountered together. Then we will look at a special method for solving ordinary differential equation boundary value problems, called the shooting method. You actually use an initial value problem solver to solve a boundary value problem; we will look at how this is done, and then again we will look at convergence analysis of solvers for ODE-IVPs. So this brings us to the end of this introduction to the six modules. If I want to sum up the overall learning objectives of this course: first, you should know how to transform a mathematical problem at hand into a computable form using principles of approximation theory; almost half the course is devoted to that. Then, understand the basic properties of three different tools: solving linear algebraic equations, solving nonlinear algebraic equations, and solving ordinary differential equations subject to given initial values or initial conditions (ODE-IVPs). Understand the different numerical schemes for solving these standard classes of problems, and understand their limitations.
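The shooting method mentioned above can be sketched with a toy boundary value problem. This is my own assumed illustration (the BVP y'' = 6t with y(0) = 0, y(1) = 1, whose exact solution is y = t³, plus the helper names): we guess the missing initial slope s = y'(0), integrate the resulting IVP, and adjust s until the far boundary condition is met, here via a secant iteration:

```python
def integrate(s, n_steps=400):
    """Integrate the IVP y'' = 6t, y(0) = 0, y'(0) = s up to t = 1
    with Heun's method on the first-order system (y, v)' = (v, 6t)."""
    h = 1.0 / n_steps
    t, y, v = 0.0, 0.0, s          # v stands for y'
    for _ in range(n_steps):
        k1y, k1v = v, 6.0 * t
        k2y, k2v = v + h * k1v, 6.0 * (t + h)
        y += 0.5 * h * (k1y + k2y)
        v += 0.5 * h * (k1v + k2v)
        t += h
    return y

def shoot(target=1.0):
    """Secant iteration on the slope guess s so that y(1; s) = target."""
    s0, s1 = 0.5, 1.0
    r0, r1 = integrate(s0) - target, integrate(s1) - target
    for _ in range(20):
        if abs(r1) < 1e-10:
            break
        s0, s1 = s1, s1 - r1 * (s1 - s0) / (r1 - r0)
        r0, r1 = r1, integrate(s1) - target
    return s1

slope = shoot()
# The exact solution is y = t^3, so the true initial slope y'(0) is 0
```

The point to notice is that the boundary value problem is solved entirely with an initial value problem solver, exactly as described above; the BVP structure only enters through the outer root-finding loop on the slope.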
If you understand their limitations, their strengths, and how they are developed, you will be in a much better position to employ them. Finally, what I want you to take away is that a numerical scheme is actually like a recipe, and you are going to be a cook who can concoct a recipe for a given problem. You have some fundamental tools coming from approximation theory: you first use a combination of these tools to transform the problem, and then you solve the transformed problem using the standard toolkits that you have. This journey is going to be fairly long, about 48 lectures, and we begin it from the next lecture. Thank you.