Good morning. As we saw in the last lecture, the method of undetermined coefficients for solving linear non-homogeneous ODEs is limited: it requires constant coefficients, and it can handle only a few selected right-hand-side functions, namely polynomials, exponentials, sinusoids and their combinations. For the general situation, in which the right-hand side can be any function and the coefficients may be variable, that is, functions of x, the general method for solving the non-homogeneous equation is the method of variation of parameters. We first note that if y1 and y2 are two linearly independent solutions of the corresponding homogeneous equation, then the linear combination C1 y1 + C2 y2 with constant parameters C1 and C2 gives the general solution of the homogeneous equation, that is, the complementary function for the purpose of the non-homogeneous equation. Now, a particular solution of the non-homogeneous equation must be linearly independent of these two basis members, and as long as the parameters are constant we cannot obtain such a function. So we make the parameters variable: we replace the constants C1 and C2 by two functions u1(x) and u2(x). Since we are varying the parameters, the method is called variation of parameters. We therefore propose the particular solution of the non-homogeneous ODE as yp = u1 y1 + u2 y2, and we ask for the functions u1 and u2 that make this yp satisfy the given differential equation.
Now, one point is clear: we could take this yp, differentiate it twice, insert yp, yp' and yp'' into the equation, and demand that the equation be satisfied. But that would give a single second-order ODE in u1 and u2, and a single differential equation is not enough to determine two unknown functions; we need two conditions. So at this stage we need one more condition on u1 and u2, apart from the requirement that together they make yp satisfy the differential equation, and we try to frame that additional condition so that our work is also reduced. Differentiating the proposed yp by the product rule gives yp' = u1' y1 + u1 y1' + u2' y2 + u2 y2'. If we differentiate further, all four functions u1, u2, y1, y2 will generate their second derivatives. With y1 and y2 there is no difficulty, because they are known functions, but second derivatives of u1 and u2 would mean that the resulting equation in the unknowns is again a second-order differential equation. We can choose the additional condition in a manner that obviates this: we collect the terms containing the first derivatives of the unknown functions u1 and u2.
Those are the terms u1' y1 and u2' y2, and we impose the condition that they vanish together: u1' y1 + u2' y2 = 0. With that, yp' reduces to u1 y1' + u2 y2', which contains the derivatives of y1 and y2 but not of the unknown functions. So when we differentiate once more, the unknowns appear only up to first order: yp'' = u1' y1' + u1 y1'' + u2' y2' + u2 y2''. The known functions y1 and y2 appear up to second derivatives, but that is not a difficulty. Note that we cannot impose a second condition of the same kind, namely u1' y1' + u2' y2' = 0, because the two conditions together would form a homogeneous system of linear equations in u1' and u2' with a non-singular coefficient matrix, which admits only the trivial solution u1' = u2' = 0. Then u1 and u2 would be constants, and that would not give a solution of the non-homogeneous equation. So only one condition could be imposed freely; the second condition we must get by substituting yp, yp' and yp'' into the differential equation itself.
As we do that, yp'' + p yp' + q yp = r(x), the given differential equation. Rearranging and collecting all terms containing u1, we get u1 (y1'' + p y1' + q y1), and similarly u2 (y2'' + p y2' + q y2). Now, y1 and y2 are solutions of the corresponding homogeneous equation, so both bracketed expressions are zero. Dropping those zero terms, we are left with u1' y1' + u2' y2' = r(x). So the condition we imposed earlier, u1' y1 + u2' y2 = 0, through which we also simplified yp' and avoided second derivatives of u1 and u2, is one equation, and the given differential equation supplies the second equation in the two unknowns u1' and u2'. Writing the two equations in matrix-vector form, the coefficient matrix has rows (y1, y2) and (y1', y2'), and we get the system in the standard form. This matrix is non-singular for all x in the interval for which we are seeking the solution: its determinant is the Wronskian, and as long as y1 and y2 are linearly independent solutions forming the basis for the solutions of the homogeneous equation, the Wronskian is non-zero. So the matrix is invertible and the system yields unique u1' and u2'.
Solving the two equations, we get u1' = -y2 r / W and u2' = y1 r / W; the Wronskian W goes in the denominator, which is not a problem because it is non-zero. Each right-hand side is a known function of x, so direct integration gives u1 and u2. Putting these into the proposed form yp = u1 y1 + u2 y2 gives the particular solution we are looking for. This is the method of variation of parameters for solving the non-homogeneous differential equation in the general situation. The first step, even here, is to find the complete solution of the corresponding homogeneous equation, because its basis solutions form the coefficient matrix. The method is applicable for all functions p(x), q(x) and r(x) that are continuous and bounded, and it is valid on the interval in which these functions remain so. These, then, are the important points of this lesson: after looking at the function-space perspective of linear ODEs, we studied the two methods of undetermined coefficients and variation of parameters. These same methods, and the previous discussion on solving the corresponding homogeneous equation, we now generalize to higher-order differential equations. Consider the general linear ODE of, say, n-th order: derivatives up to the n-th appear linearly, and the coefficients are functions of x only. Again, we consider only those cases where p1, p2, ..., pn and r are functions of x that are continuous and bounded on the interval under discussion.
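The second-order recipe above can be sketched in code. The following is a minimal illustration, not from the lecture: I pick the example ODE y'' + y = x, whose homogeneous basis is cos x and sin x with Wronskian W = 1, and the helper names trapezoid, u1p, u2p and y_p are my own. It computes u1' = -y2 r / W and u2' = y1 r / W, integrates them numerically, and assembles yp = u1 y1 + u2 y2.

```python
import math

def trapezoid(f, a, b, n=2000):
    """Composite trapezoid rule for the quadratures giving u1 and u2."""
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b))
    for i in range(1, n):
        s += f(a + i * h)
    return s * h

# Homogeneous basis of y'' + y = 0; Wronskian cos^2 + sin^2 = 1
y1, y2 = math.cos, math.sin
def r(x):            # right-hand side of the example: y'' + y = x
    return x
def W(x):
    return 1.0

def u1p(x):          # u1' = -y2 r / W
    return -y2(x) * r(x) / W(x)
def u2p(x):          # u2' =  y1 r / W
    return y1(x) * r(x) / W(x)

def y_p(x):
    # Integrating from 0 picks particular antiderivatives; the result
    # here is x - sin(x), which differs from the simplest particular
    # solution x only by the homogeneous term -sin(x), so it is still
    # a valid particular solution of y'' + y = x.
    u1 = trapezoid(u1p, 0.0, x)
    u2 = trapezoid(u2p, 0.0, x)
    return u1 * y1(x) + u2 * y2(x)

print(y_p(2.0))      # close to 2 - sin(2), about 1.0907
```

The same structure carries over verbatim to any second-order equation once y1, y2, r and W are replaced.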
Again, the general solution will be y = yh + yp, where yh is the complementary function, in other words the general solution of the corresponding homogeneous equation, obtained by setting the right-hand side to 0. For the homogeneous equation, suppose we have n solutions y1, y2, y3 and so on. Since this is an n-th order differential equation, the state vector of a solution contains the function itself, its first derivative, its second derivative, and so on up to the (n-1)-th derivative. Assembling the state vectors of the n solutions as columns, we get an n-by-n matrix of functions: the first row contains the functions themselves, the second row their first derivatives, the third row their second derivatives, and so on, the last (n-th) row containing their (n-1)-th derivatives. This is the fundamental matrix, which will appear again and again. The determinant of this matrix is the Wronskian of the n solutions, a direct generalization of the second-order case. Parallel to the theory we studied for second-order equations, we have corresponding results here. The first is: if the n solutions of the homogeneous equation are linearly dependent, then the Wronskian is 0. How do we prove that?
If the solutions are linearly dependent, then there is a non-zero vector k, that is, constants k1, k2, ..., kn not all zero, such that the linear combination sum of ki yi is identically 0 even though the individual contributions are not. Differentiating this identity gives sum of ki yi' = 0, differentiating again gives sum of ki yi'' = 0, and so on; up to the (n-1)-th derivative we can construct such sums and get them all 0. Together these say exactly that Y k = 0, where Y is the fundamental matrix: the first row of this matrix-vector equation is the original combination, the second row is the first-derivative identity, the third row the second-derivative identity, and so on for all n rows. So if y1 to yn are linearly dependent, the homogeneous system Y k = 0 has a non-trivial solution k different from 0, and for a homogeneous system to have a non-trivial solution the coefficient matrix must be singular. So Y is singular, which means its determinant, the Wronskian, is 0. This is the first point: the n solutions of the homogeneous equation being linearly dependent implies that the Wronskian is 0.
Now the converse: if the Wronskian is 0 at some point, then the matrix Y evaluated at that point is singular, so a non-zero vector k can be found in its null space, which in turn gives a vanishing linear combination and implies that the solutions are linearly dependent. Together, the two statements also mean that a zero Wronskian at some point x0 immediately implies a zero Wronskian everywhere: zero at one point means the solutions are linearly dependent, which in turn means the Wronskian is 0 for all x. Equivalently, a non-zero Wronskian at some point ensures a non-zero Wronskian everywhere and that the corresponding solutions are linearly independent. With these n linearly independent solutions of the homogeneous equation, we have the general solution as a linear combination, just as in the second-order case, and this complete solution of the homogeneous equation serves as the complementary function for the non-homogeneous equation. Now consider the constant-coefficient case of the homogeneous equation, for which the complete solution can always be found. If the coefficients are variable functions p1(x), p2(x) and so on, then in general there are no analytical procedures to find all solutions of the homogeneous equation. But if the coefficients are constant, we can proceed exactly as in the second-order case, with the necessary extensions: here too the left-hand side has y, y', y'' and so on in a linear combination, so we expect solutions of exponential form.
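The fundamental matrix and its Wronskian can be checked concretely. This is a small sketch of my own, not from the lecture: it builds the n-by-n fundamental matrix from derivative functions and takes its determinant, using the example basis e^x, e^(2x), e^(3x) of y''' - 6y'' + 11y' - 6y = 0, whose Wronskian works out to 2 e^(6x), non-zero everywhere, confirming independence.

```python
import math

def det(M):
    """Determinant via Gaussian elimination with partial pivoting."""
    A = [row[:] for row in M]
    n, d = len(A), 1.0
    for i in range(n):
        p = max(range(i, n), key=lambda k: abs(A[k][i]))
        if abs(A[p][i]) == 0.0:
            return 0.0            # singular matrix: zero Wronskian
        if p != i:
            A[i], A[p] = A[p], A[i]
            d = -d                # a row swap flips the sign
        d *= A[i][i]
        for k in range(i + 1, n):
            f = A[k][i] / A[i][i]
            for j in range(i, n):
                A[k][j] -= f * A[i][j]
    return d

def wronskian(derivs, x):
    # derivs[i][j] is the j-th derivative of solution y_{i+1};
    # row j of the fundamental matrix holds the j-th derivatives.
    n = len(derivs)
    return det([[derivs[i][j](x) for i in range(n)] for j in range(n)])

# Basis e^x, e^{2x}, e^{3x} of y''' - 6y'' + 11y' - 6y = 0
def dexp(k, j):
    return lambda x: (k ** j) * math.exp(k * x)

derivs = [[dexp(k, j) for j in range(3)] for k in (1, 2, 3)]
print(wronskian(derivs, 0.5))     # equals 2 e^{6x} at x = 0.5, i.e. 2 e^3
```

Evaluating at any other x gives the same non-zero conclusion, in line with the "non-zero at one point, non-zero everywhere" result above.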
Taking a trial solution e^(lambda x) and inserting it, each differentiation multiplies it by lambda, so we get the auxiliary equation lambda^n + a1 lambda^(n-1) + ... + an = 0. This is a polynomial equation of degree n, so it has n roots. For every simple real root gamma, that is, one for which the factor (lambda - gamma) appears only once in the factorization, the corresponding exponential e^(gamma x) is a solution of the differential equation; that is the very idea with which we started. For every simple pair of complex roots mu plus or minus i omega, e^((mu + i omega) x) and e^((mu - i omega) x) are indeed two solutions; as in the second-order case, we reorganize them into the two real solutions e^(mu x) cos(omega x) and e^(mu x) sin(omega x), which are the corresponding linearly independent solutions of the homogeneous equation. If a real root gamma is repeated with multiplicity r, then e^(gamma x), x e^(gamma x), x^2 e^(gamma x), and so on up to x^(r-1) e^(gamma x) give r linearly independent solutions; this is something you can show. Similarly, every repeated pair of complex roots gives additional linearly independent solutions in the same pattern. For example, if mu plus or minus i omega appears twice, that should account for four linearly independent solutions.
The first pair is e^(mu x) cos(omega x) and e^(mu x) sin(omega x), and the second pair is these two multiplied by x. Similarly, if the pair appeared a third time, the third pair would carry x^2: x^2 e^(mu x) cos(omega x) and x^2 e^(mu x) sin(omega x), and so on; with multiplicity r you go up to the power x^(r-1). In this way, once you determine the n roots of the auxiliary polynomial, you can determine n linearly independent solutions of the homogeneous differential equation. Having the homogeneous solutions, you next want the solution of the non-homogeneous equation, with the whole left-hand side equal to some r(x). If the equation has constant coefficients and r(x) is one of the special right-hand-side functions, a polynomial, exponential, sinusoid or a combination of these, then the method of undetermined coefficients gives a particular solution, working exactly as an extension of the second-order case. The modification rules also apply here: if r(x) is e^(gamma x) but e^(gamma x) is already in the basis of homogeneous solutions, then rather than proposing e^(gamma x) you propose x e^(gamma x). Similarly, if e^(gamma x), x e^(gamma x), and so on up to x^(r-1) e^(gamma x) are all already sitting in that basis, then even though e^(gamma x) appears on the right-hand side, you must propose yp as a constant times x^r e^(gamma x), the next power, and so on. That is the general rule.
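The root-to-basis bookkeeping above is mechanical enough to automate. This is an illustrative sketch of my own (the function name basis_from_roots and its string output format are not from the lecture): given the auxiliary equation's roots with multiplicities, it lists the corresponding n linearly independent solutions, with complex pairs given once with positive imaginary part.

```python
def basis_from_roots(roots):
    """roots: list of (root, multiplicity) pairs. Complex roots are
    given once with positive imaginary part; the conjugate is implied,
    so each such entry contributes a cos and a sin solution."""
    basis = []
    for lam, m in roots:
        for j in range(m):
            # power of x rises with the repetition count j = 0..m-1
            xj = "" if j == 0 else ("x*" if j == 1 else f"x**{j}*")
            if isinstance(lam, complex) and lam.imag != 0:
                mu, om = lam.real, lam.imag
                basis.append(f"{xj}exp({mu}*x)*cos({om}*x)")
                basis.append(f"{xj}exp({mu}*x)*sin({om}*x)")
            else:
                basis.append(f"{xj}exp({lam}*x)")
    return basis

# simple real root 2; complex pair 1 +/- 3i with multiplicity 2
print(basis_from_roots([(2, 1), (complex(1, 3), 2)]))
```

For the example shown, the five basis members are e^(2x), e^x cos 3x, e^x sin 3x, x e^x cos 3x and x e^x sin 3x, exactly the pattern described above.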
So the method of undetermined coefficients works exactly as in the case of the second-order equation, but, as you know, in the general case, with a general right-hand-side function and with functions appearing in place of the coefficients a1, a2, a3, it will not be sufficient. For those situations you need the more general method of variation of parameters, which works in this case as well. For that, we propose yp as a linear combination u1 y1 + u2 y2 + ... + un yn of the n functions y1, ..., yn that are known to be the basis members of the solution of the corresponding homogeneous equation, that is, n linearly independent solutions of the equation with right-hand side 0. We will differentiate this proposal, eventually n times, and insert the function and its derivatives into the equation in order to find a particular solution of the non-homogeneous equation. Differentiating the first time, the derivative of these n terms gives 2n terms: n terms of the form ui' yi and n terms of the form ui yi'. We impose the additional condition that the sum of the ui' yi terms is 0, so the first derivative reduces to yp' = sum of ui yi'. This condition says, in the vector-space sense, that the rate u' of the vector function u(x) with components u1, u2, and so on is orthogonal to the vector function y(x). That is the condition we have imposed, and it gives yp' in this reduced form.
Note that this condition is exactly the first row of the matrix-vector equation we are building: y1 u1' + y2 u2' + ... + yn un' = 0. Imposing it, the first derivative is free of u' terms, which means its derivative will be free of u'' terms. Differentiating again, there are once more 2n terms, and we impose the second condition: the sum of the ui' yi' terms vanishes, leaving yp'' = sum of ui yi''. This second condition is the second row of the matrix-vector equation: y1' u1' + y2' u2' + ... + yn' un' = 0. We go on differentiating and imposing such conditions at every step, up to the derivative of order n-1. At the stage where we need the last derivative, yp^(n), we differentiate once more and keep the full 2n terms: sum of ui' yi^(n-1) plus sum of ui yi^(n). Both sets of terms appear in this n-th derivative. Note that up to this point, n-1 conditions have been imposed, involving the functions and their derivatives up to order n-2; these are the top n-1 rows of the matrix-vector equation, with coefficients up to the (n-2)-th derivatives.
The last equation we get by inserting this yp^(n), together with all the previous derivative expressions and yp itself, into the given differential equation. When we do that, we add to the last expression p1 times the (n-1)-th derivative, p2 times the (n-2)-th, and so on, down to pn times yp itself. Collecting terms, the sum of ui' yi^(n-1) stands alone, and in all the remaining terms ui is common, so we keep ui outside and inside the bracket we have yi^(n) + p1 yi^(n-1) + p2 yi^(n-2) + ... + pn yi. Now, each of the functions y1, y2, y3 and so on is a solution of the corresponding homogeneous equation, and therefore for every i separately this bracketed term vanishes. That whole sum goes to 0, the sum of ui' yi^(n-1) remains on the left-hand side, and r(x) remains on the right-hand side. That gives us the last row of the system of equations. And what is the matrix sitting here? It is our old friend, the fundamental matrix Y(x) constructed from the state vectors of the n linearly independent homogeneous solutions as its columns. So the assembled matrix-vector equation is Y(x) u' = en r(x), where en is the vector with all entries 0 except a 1 in the last position, so the right-hand side has r(x) as its last entry and zeros elsewhere.
So en is the last member of the natural basis of the n-dimensional space. This is a matrix-vector equation in the rate u', and the matrix Y is non-singular because the solutions form a basis, that is, they are all linearly independent, so its determinant, the Wronskian, is non-zero. Hence this non-singular matrix can be inverted and the equation solved. Now, I have told you earlier, in the context of linear algebra, that numerical computation of an inverse through the adjoint formula is typically inefficient; but here we are doing the analysis in terms of expressions, not numbers, and in this context the formula turns out to be advantageous. The inverse of Y is its adjoint divided by its determinant, so u'(x) = (1/det Y) adj(Y) en r(x). Note that the adjoint matrix is immediately multiplied by a vector whose entries are all 0 except the last one, so adj(Y) en is essentially the last column of adj(Y); we need not construct the entire adjoint of this matrix, only its last column. The adjoint is the transpose of the matrix of cofactors, so the last column of the adjoint is formed by the cofactors of the last row of Y, and we need the cofactors of the elements of the last row only. What is the cofactor of such an entry? For it, we remove that entry's row and column and work with what remains.
We take the determinant of what remains and attach the sign determined by the row and column numbers, here row n and column 1, 2, 3 and so on; the position gives the sign, and the entry's row and column are removed. Equivalently, we can get the cofactor of that element by replacing its entire column with en = (0, 0, ..., 0, 1): since that column is removed anyway, putting a 1 in the last position and zeros elsewhere and then taking the full determinant yields exactly the cofactor, because expansion along that column produces a single term, 1 times the correct minor, with the correct positional sign coming automatically from the determinant expansion. So the i-th entry of the last column of the adjoint is Wi, the same kind of determinant as the Wronskian, but evaluated with en in place of the i-th column. This gives the expression ui' = Wi r / W, a known function of x only, so each ui follows by direct quadrature, direct integration; and the set u1, u2, u3 and so on, inserted into the original proposal, gives a particular solution of the non-homogeneous equation.
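The column-replacement formula above is just Cramer's rule applied to Y u' = en r(x), and it can be sketched directly. The following is my own minimal illustration, not the lecture's: using the fundamental matrix of e^x, e^(2x), e^(3x) at x = 0 with r(0) = 1, it computes each ui' as a ratio of determinants and then verifies Y u' reproduces (0, 0, r).

```python
import math

def det(M):
    """Determinant via Gaussian elimination with partial pivoting."""
    A = [row[:] for row in M]
    n, d = len(A), 1.0
    for i in range(n):
        p = max(range(i, n), key=lambda k: abs(A[k][i]))
        if abs(A[p][i]) == 0.0:
            return 0.0
        if p != i:
            A[i], A[p] = A[p], A[i]
            d = -d
        d *= A[i][i]
        for k in range(i + 1, n):
            f = A[k][i] / A[i][i]
            for j in range(i, n):
                A[k][j] -= f * A[i][j]
    return d

def cramer_up(Y, rx):
    """u_i' by Cramer's rule: replace column i of Y by e_n * r(x)."""
    n = len(Y)
    W = det(Y)                    # the Wronskian; non-zero for a basis
    rhs = [0.0] * n
    rhs[-1] = rx                  # e_n scaled by r(x)
    up = []
    for i in range(n):
        Yi = [row[:] for row in Y]
        for j in range(n):
            Yi[j][i] = rhs[j]     # swap in the right-hand-side column
        up.append(det(Yi) / W)
    return up

# Fundamental matrix of e^x, e^{2x}, e^{3x} at x = 0, with r(0) = 1
x = 0.0
Y = [[(k ** j) * math.exp(k * x) for k in (1, 2, 3)] for j in range(3)]
up = cramer_up(Y, 1.0)
resid = [sum(Y[j][i] * up[i] for i in range(3)) for j in range(3)]
print(up)        # the rates u1', u2', u3' at this x
print(resid)     # should reproduce [0, 0, r(x)] = [0, 0, 1]
```

In the symbolic setting of the lecture, the same ratios Wi r / W would be integrated in x to obtain the ui.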
So this is how we develop the general solution of a non-homogeneous equation: we first develop the general solution of the corresponding homogeneous equation, obtained by replacing r with 0; from that we find the basis y1 to yn for all solutions of the homogeneous equation; we use that basis to propose a solution of the non-homogeneous equation including r(x); the expressions above determine the proposed coefficient functions u1, u2 and so on, giving one particular solution of the non-homogeneous equation; and then yh + yp gives the complete solution. Many of these ideas, the Wronskian for higher order, the basis describing the solutions of the homogeneous equation, and the way the basis members can be used to find a particular solution of the general non-homogeneous equation, will find analogous counterparts when we discuss the solution of systems of ordinary differential equations in the next lecture. In the remaining time of this lecture, however, we will take a small digression into another topic, in a way complementary to this classical framework of solving linear ordinary differential equations: the paradigm of the Laplace transform method. Till now, in the classical framework, we have typically worked with the understanding that the entire differential equation is known in advance, both the left-hand side, with its terms in y and its derivatives, and the right-hand side r(x).
So the entire differential equation is known in advance; that is the typical understanding with which we have developed the methods till now, and the typical approach in the previous few lectures, including the current one, has been to find the complete solution first and then, if initial or boundary conditions are available, impose them to determine the arbitrary constants arising from the solution process. That is the classical perspective. However, there can be practical situations where you already have a plant, a physical system, which means you know the left-hand side of the differential equation completely. That particular plant can be operated with different kinds of input, which means r(x) can vary from one application to another. Having the plant or system means you know its intrinsic dynamic model, and furthermore you know the starting condition: the initial position and the initial rates up to order n-1. All of that is known; as you decide to plug in different kinds of inputs on different occasions, you change only the right-hand side, without changing the left-hand side of the differential equation and without changing the initial conditions. So the entire differential equation is not known in advance: only its left side is known, while the input-function side can change from one application to another. On the other hand, the practice of using the initial conditions only afterwards does not sound so natural, because we know the initial conditions beforehand.
In such a situation, a method of solution that takes care of the left side of the differential equation and the initial conditions first, in one framework, and leaves only the last part of the work to be done differently in different situations, represents a paradigm shift. With the left-hand side known and the initial conditions known, and only the right-hand side r(x) changing from task to task, the Laplace transform method takes care of the fixed part completely and keeps the job half done, waiting for the right-hand side to appear at any time; as a different right-hand side appears, the solution is completed with a small amount of additional work. Apart from this, there is another question which is answered properly and adequately by the Laplace transform method and not so well by the methods we have been discussing till now: what if r(x) is not continuous? Till now we have relied on existence and uniqueness results which assume that all the coefficient functions and r(x) are continuous and bounded. But suppose the input function with which we try to drive the system is not continuous: for example, when power is suddenly switched on or off, if the supply provides the input function r(x), there is a discontinuity in r(x), with x representing time here. So as power is switched on or off, what happens? That is the question we ask when we pose an initial value problem: is this initial value problem well posed, and what is the future evolution of y(x)?
Now, when we ask whether this initial value problem is well posed, we are asking whether a solution to the differential equation exists, whether the solution is unique, and so on. Apart from switching on or off, there may be a voltage fluctuation: suppose the voltage is somehow related to this input function of time. Then, as the voltage fluctuates in some appliance whose dynamic model is sitting on the left-hand side, we ask what happens to the state of the appliance connected to the power line, and how that state evolves with time. Asking whether anything happens in the immediate future is the mathematical equivalent of asking whether a solution of this initial value problem exists. Physically, this question almost answers itself, because something will certainly happen; and as something certainly happens, we say that this initial value problem certainly has a solution. But methods that depend on r(x) being continuous and bounded may not help us find that solution in such a situation. Laplace transforms provide a tool to find the solution in spite of the discontinuity of the right-hand side function r(x), through certain ways of handling discontinuity. So, let us quickly have an overview of the Laplace transform technique and its main salient features before we proceed to systems of ordinary differential equations. The Laplace transform happens to be one particular kind of integral transform, of the form T{f}(s) = integral of K(s, t) f(t) dt, where K(s, t) is a kernel function and f(t) is the function for which we are looking for the transform. The resulting function T{f} is actually not a function of t, because the integration with respect to t has been carried out to obtain the transform.
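To make this concrete, here is a small symbolic sketch (using SymPy as an illustrative tool of my own choosing, not something from the lecture) that applies the exponential kernel e^(-st), which the next part identifies as the Laplace kernel, to the simplest input f(t) = 1, evaluating the improper integral as a limit:

```python
import sympy as sp

t, s, T = sp.symbols('t s T', positive=True)

# Integral transform with kernel K(s, t) = exp(-s*t) applied to f(t) = 1;
# the improper integral over [0, oo) is taken as the limit of [0, T].
partial = sp.integrate(sp.exp(-s*t) * 1, (t, 0, T))
F = sp.limit(partial, T, sp.oo)

assert sp.simplify(F - 1/s) == 0   # the transform of the constant 1 is 1/s
```

Note that the result F is a function of the new variable s alone; t has been integrated out, exactly as the text says.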
The transformed function involves the other variable s, which is typically called the frequency variable. So, with the kernel function K(s, t) = e^(-st) and limits of integration from 0 to infinity, we have what is called the Laplace transform, defined by F(s) = integral from 0 to infinity of e^(-st) f(t) dt, where this improper integral is evaluated as the limit of the integral from 0 to T as T tends to infinity. Now, under certain conditions a function f(t) has a Laplace transform, and the corresponding inverse operation is called the inverse Laplace transform: if F(s) is the Laplace transform of f(t), then f(t) is called the inverse Laplace transform of F(s). With some background work, people have developed long tables of Laplace transforms and inverse Laplace transforms, which we can keep as reference. Right now I am omitting those details and going directly to the typical methodology of solving differential equations with the help of the Laplace transform method. So, suppose we have the linear second order differential equation with constant coefficients, y'' + a y' + b y = r(t), with initial conditions y(0) = k_0 and y'(0) = k_1; being of second order, it needs two initial conditions. Now, if we take the Laplace transform of both sides, using the rules for the Laplace transform of derivatives and the way the initial conditions appear in them, we get (s^2 + a s + b) Y(s) = (s + a) k_0 + k_1 + R(s), where Y(s) is the Laplace transform of the unknown function y(t), which is what we want to determine. Up to this point we can proceed, except for this R(s), even without knowledge of the right-hand function r(t). The important point to note here is that the differential equation has been reduced to an algebraic equation, and from here we can solve for Y(s) in terms of the other quantities.
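The reduction to an algebraic equation can be verified symbolically. The following sketch (again SymPy, my choice of tool) encodes the derivative rules L{y'} = sY - y(0) and L{y''} = s^2 Y - s y(0) - y'(0) and solves for Y(s):

```python
import sympy as sp

s = sp.symbols('s')
a, b, k0, k1 = sp.symbols('a b k0 k1')
Y, R = sp.symbols('Y R')   # Y = L{y}, R = L{r}, treated as plain unknowns

# Transform y'' + a*y' + b*y = r(t) term by term, using
# L{y'} = s*Y - k0 and L{y''} = s**2*Y - s*k0 - k1:
eq = sp.Eq((s**2*Y - s*k0 - k1) + a*(s*Y - k0) + b*Y, R)
Ysol = sp.solve(eq, Y)[0]

# The ODE has become algebra:
# Y(s) = ((s + a)*k0 + k1 + R) / (s**2 + a*s + b)
assert sp.simplify(Ysol - ((s + a)*k0 + k1 + R)/(s**2 + a*s + b)) == 0
```

Everything here except R is known before the input is specified, which is exactly the "job half done" the lecture describes.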
So, dividing the whole right-hand side by (s^2 + a s + b) gives us, not the solution itself, but rather the Laplace transform of the solution: Y(s) = [(s + a) k_0 + k_1 + R(s)] / (s^2 + a s + b). So, even without knowing the solution y(t), we know its Laplace transform Y(s); the factor Q(s) = 1/(s^2 + a s + b) is why Q(s) has been put here. Then, once R(s) is also known, we solve for Y(s) from this algebraic equation in Y(s), and the inverse Laplace transform of Y(s) turns out to be the solution. So, in this method the initial conditions have been involved from the very beginning, and the inversion of the Laplace transform is conducted at the end to recover the solution. Now, in this framework we can, of course, easily handle only a limited class of plants, with constant coefficients and so on, but we can handle discontinuity of r(t) through two important functions and their Laplace transforms. The first is the unit step function u(t - a): if t is less than a its value is 0, and if t is greater than a its value is 1; its Laplace transform has been determined and is found to be e^(-as)/s. Now, if a function appears with a time delay, after time a, then its effect is f(t - a) u(t - a), and for this we can work out the Laplace transform in terms of the Laplace transform of the original function: it is e^(-as) F(s), which is the result of the shift in time. This is one important discontinuous function for which we have the Laplace transform, to be used whenever this kind of input is suddenly applied and thereafter maintained at a known value. Another important discontinuous function is the Dirac delta function, in which there is suddenly a huge jump in the value of r.
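Both step-function facts (L{u(t-a)} = e^(-as)/s and the time-shift rule) can be checked directly from the defining integral; here is a short SymPy sketch, with f(t) = t as my sample shifted function:

```python
import sympy as sp

t, s, a = sp.symbols('t s a', positive=True)

# u(t - a) vanishes for t < a, so its Laplace integral runs from a to oo.
U = sp.integrate(sp.exp(-s*t), (t, a, sp.oo))
assert sp.simplify(U - sp.exp(-a*s)/s) == 0           # L{u(t-a)} = e^(-as)/s

# Time-shift rule checked for f(t) = t: the delayed signal (t-a)*u(t-a)
# transforms to exp(-a*s) * L{t} = exp(-a*s)/s**2.
shifted = sp.integrate((t - a)*sp.exp(-s*t), (t, a, sp.oo))
assert sp.simplify(shifted - sp.exp(-a*s)/s**2) == 0
```

The symbols s and a are declared positive so that the improper integrals converge.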
At that particular moment the value is infinite, or very large, and at all other times it is 0, with the special property that under that huge spike the area is unity. This is the Dirac delta function, or unit impulse function, delta(t - a). For this also the Laplace transform has been determined, and it turns out to be e^(-as). Now, with the help of these two discontinuous functions and their Laplace transforms, we can handle the situations where the right-hand side function of the differential equation is discontinuous. So, through the step function and the impulse function, the Laplace transform method can handle initial value problems with discontinuous inputs as well. Now, another important term appears quite often in the discussion of the Laplace transform method, and that is convolution. The convolution of two functions is actually a kind of generalized product, defined as (f * g)(t) = integral from 0 to t of f(tau) g(t - tau) d tau. Now, you can show that the Laplace transform of the convolution is the product of the corresponding Laplace transforms: if you have two functions f and g and their convolution is h, then the Laplace transform of h turns out to be F(s) G(s), the product of the two individual Laplace transforms. This is called the Laplace convolution theorem: the Laplace transform of the convolution integral of two functions is given by the product of the Laplace transforms of the two functions. Now, quite often in this context we know Q(s); in particular, if the initial conditions k_0 and k_1 are 0, then the terms (s + a) k_0 + k_1 vanish, and after this step we are left simply with Y(s) = Q(s) R(s), which we can then invert.
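The convolution theorem can likewise be verified for a sample pair of functions; a sketch in SymPy, with f(t) = t and g(t) = e^(-t) chosen by me for easy transforms:

```python
import sympy as sp

t, tau, s = sp.symbols('t tau s', positive=True)

f = t              # sample pair of functions, chosen by me
g = sp.exp(-t)

# Convolution (f * g)(t) = integral from 0 to t of f(tau)*g(t - tau) d tau
conv = sp.integrate(f.subs(t, tau) * g.subs(t, t - tau), (tau, 0, t))

# Convolution theorem: L{f * g} = F(s) * G(s)
lhs = sp.laplace_transform(conv, t, s, noconds=True)
rhs = (sp.laplace_transform(f, t, s, noconds=True)
       * sp.laplace_transform(g, t, s, noconds=True))
assert sp.simplify(lhs - rhs) == 0
```

Here F(s) = 1/s^2 and G(s) = 1/(s + 1), so both sides reduce to 1/(s^2 (s + 1)).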
Now, after the input function r(t) has been specified, rather than finding its Laplace transform R(s), multiplying that with Q(s), and then taking the inverse Laplace transform, one could do well to directly find the convolution integral of the original functions q(t) and r(t), because the inverse Laplace transform of Q(s) R(s), which is what we want, is the same as the convolution integral of the original functions in time, q(t) and r(t). So, in that sense, in many cases where r(t) changes, rather than waiting for r(t) to appear and then computing R(s), quite often from Q(s) itself (this is called the transfer function) we find the corresponding q(t) through its inverse Laplace transform and keep it; then, the moment a new input function r(t) is supplied, we do not go into Laplace transforms any further. Rather than going to the Laplace transform to find R(s), multiplying, and then inverting, we simply take the new r(t) and the earlier determined q(t) and construct their convolution through the definition of the convolution integral. So, this is another important issue which appears in the analysis of the Laplace transform method, and it will appear again when we later study the Fourier transform. With this little discussion on the Laplace transform, we will continue in the next lecture with the solution of systems of ordinary differential equations. So, ODE systems we will take up in our next lecture, and from there we will discuss the stability of dynamic systems, which we will be solving through these methods. Thank you.
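To illustrate this workflow end to end, here is a sketch with a hypothetical plant y'' + 3y' + 2y = r(t) under zero initial conditions (the coefficients and the input are my example, not the lecturer's): invert the transfer function once to obtain q(t), then handle each new input by convolution alone.

```python
import sympy as sp

t, tau, s = sp.symbols('t tau s', positive=True)

# Hypothetical plant (my example): y'' + 3*y' + 2*y = r(t), zero initial
# conditions, so Y(s) = Q(s)*R(s) with the transfer function Q(s) below.
Q = 1 / (s**2 + 3*s + 2)
q = sp.inverse_laplace_transform(Q, s, t)    # computed once, reused per input

# For each new input r(t), skip the transforms entirely: convolve q with r.
r = sp.Integer(1)                            # e.g. a unit input switched on at t = 0
y = sp.integrate(q.subs(t, tau) * r, (tau, 0, t))

# Check that y(t) really solves the ODE for this input.
residual = sp.diff(y, t, 2) + 3*sp.diff(y, t) + 2*y - r
assert sp.simplify(residual) == 0
```

A new input only changes r and the one convolution integral; Q(s) and q(t) are fixed properties of the plant, which is exactly the separation between system and input that motivated this whole discussion.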