Welcome to this lecture. In this lecture, we are going to discuss canonical forms for constant-coefficient partial differential equations in more than two independent variables. The outline for today's lecture: we start by discussing canonical forms in general, and then we actually derive canonical forms for equations with constant coefficients. For linear PDEs in two independent variables, we derived canonical forms. Recall that we did not even define what the term canonical form stands for. It was mentioned that we want the equation to look like the wave equation, the heat equation, or the Laplace equation as far as the appearance of the second order partial derivatives is concerned. Some books do define what a canonical form is: they say that an equation is in canonical form if no mixed partial derivatives appear in the principal part, which for a second order linear PDE consists of the second order derivative terms. Pinchover and Rubinstein's book on PDEs hints at this kind of definition. But if you adopt that definition, saying mixed partial derivatives must not appear in the principal part, then w_ξη = 0, which is a canonical form for the wave equation, will no longer be a canonical form. And somehow we do not like this. Why is that? Because the equation w_ξη = 0 is far easier to solve than, let us say, u_tt − c²u_xx = 0. The equation u_tt − c²u_xx = 0 is the wave equation, and it has no mixed partial derivatives. How do you solve it? Not obviously. But when you convert it into w_ξη = 0, with ξ and η as the new coordinates obtained by solving a characteristic equation, things are nice. So it would not be a canonical form if you follow the definition as in this book. Now, suppose we adopt such a definition anyway; let us say that does not matter.
My definition of canonical form: an equation in which no mixed partial derivatives appear. Fine, it is very easy to define; anything is easy to define, no problem. But can we guarantee that a canonical form exists? This is a natural question; otherwise there is no use for this definition. And it is a cumbersome task to transform a given equation into its canonical form. Remember what we need to do: we have to find a new coordinate system in which no mixed partial derivative appears. That is a system of nonlinear PDEs for determining the functions φ₁, φ₂, …, φ_d which define the coordinate transformation. The system to be satisfied is ∑_{k,l=1}^{d} a_kl (∂φ_i/∂x_k)(∂φ_j/∂x_l) = 0, and this should hold whenever i ≠ j, because this expression is the coefficient A_ij of ∂²w/∂η_i ∂η_j. You do not want mixed partial derivatives to appear; therefore, when i ≠ j, you want A_ij to be 0. How many equations are there? There are N(d) = d(d − 1)/2 of them. So when d = 2, N(2) = 1, and we need to determine two functions φ₁ and φ₂. Two functions, one condition: looks good. N(3) = 3: three equations, three unknowns. They are nonlinear equations, that difficulty is anyway there, but at least three conditions for three functions looks reasonable. But in general we have d(d − 1)/2 equations to determine d functions, and this number is much more than d when d is bigger. When d ≥ 4, the number of equations, that is, the number of constraints on the functions φ₁, φ₂, …, φ_d, exceeds d. So there are more equations than functions to be determined, and therefore the system is what is called an over-determined system of PDEs: more restrictions than the number of things you need to find.
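The counting above can be made concrete with a small sketch; the function name here is my own, not from the lecture:

```python
def num_mixed_conditions(d):
    # conditions A_ij = 0 for i < j: one per unordered pair, i.e. d(d-1)/2
    return d * (d - 1) // 2

# compare number of conditions with the number of unknown functions phi_1..phi_d
for d in range(2, 7):
    over_determined = num_mixed_conditions(d) > d   # true from d = 4 onwards
    print(d, num_mixed_conditions(d), over_determined)
```

Running this shows 1 condition for d = 2, 3 for d = 3, and strictly more conditions than unknowns from d = 4 onwards.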
In such cases, the natural thing to believe is that perhaps there are no solutions; unless some magic happens, you may not have solutions. And even if solutions exist, finding them is not easy, because if d is bigger, N(d) is also a big number. So obtaining canonical forms is difficult, perhaps impossible, we do not know, from d = 4 onwards. Therefore, we abandon the idea of finding canonical forms when the number of independent variables is 3 or more; we do not do that in general. Now, why are we discussing the constant-coefficient case? When the equation has constant coefficients, some miracle happens: things are easy and you can find the canonical form. So, we understood the near impossibility of reducing linear PDEs to canonical forms in higher dimensions; however, for constant-coefficient PDEs, finding canonical forms is easier. A second order linear PDE with constant coefficients in d independent variables has exactly the same form as the earlier equation (DL), but now, in the equation (DLCC), the coefficients a_ij, b_i, c, and d are constants. In fact, the term d need not be constant; it does not matter, because everything depends only on the principal part. So these a_ij are numbers; assume everything else is constant too, though d can be a function of x, it can sit on the right-hand side, and it does not change the discussion. Assume, as before, that the matrix A = (a_ij) is symmetric. Now, since A is a symmetric matrix with real entries, there is an orthogonal matrix Q. What is an orthogonal matrix? One with QᵀQ = I, where ᵀ stands for transpose. With what property? QᵀAQ is the diagonal matrix diag(λ₁, λ₂, …, λ_d), which stands for the d × d matrix with λ₁, λ₂, …, λ_d on the diagonal and 0 everywhere else. So a symmetric matrix is diagonalizable, with an orthogonal matrix giving the similarity transformation.
How do we get that Q? We know from linear algebra: we have to take the columns of Q to be eigenvectors of the matrix A. So λ₁, λ₂, …, λ_d are the eigenvalues of A, and all of them are real numbers because the matrix is symmetric. In general, for a matrix with real entries, an eigenvalue can be a complex number; but if the matrix is symmetric, the eigenvalues must be real. Denote the ith column of the matrix Q by Q_i; that is, Q_i = (Q_1i, Q_2i, …, Q_di). Define φ_i(x) = Q_i · x, where the dot denotes the dot product of the vector Q_i with x. So we have defined new coordinates η₁, η₂, …, η_d by η_i = φ_i(x). Since the matrix Q is invertible, the linear transformation (x₁, x₂, …, x_d) ↦ (Q₁ · x, Q₂ · x, …, Q_d · x) is also invertible. The principal part of the PDE in the new coordinates is ∑_{i=1}^{d} λ_i ∂²w/∂η_i²; let us call this the new principal part, new PP. Since the classification type is invariant under coordinate transformations, the type of the PDE (DLCC) which is given to us may be determined using the new PP. Now, classification for equations with variable coefficients is based on characteristic surfaces, so let us see what characteristic surfaces become when the coefficients are constant. Things become much easier if you work in the new coordinate system η_i = Q_i · x defined above: the equation for a characteristic surface gets simplified using the new PP. A regular surface φ(η₁, η₂, …, η_d) = 0 is a characteristic surface if and only if ∑_{i=1}^{d} λ_i (∂φ/∂η_i)² = 0.
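The diagonalization step above can be sketched numerically. The matrix A below is an assumed example of my own, chosen only to illustrate the construction of Q and the new coordinates η = Qᵀx:

```python
import numpy as np

# an assumed symmetric matrix with real entries, for illustration only
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 2.0, 1.0],
              [0.0, 1.0, 2.0]])

# eigh is for symmetric matrices: real eigenvalues, orthonormal eigenvectors
# as the columns of Q
lam, Q = np.linalg.eigh(A)

# Q is orthogonal (Q^T Q = I) and Q^T A Q = diag(lambda_1, ..., lambda_d)
print(np.allclose(Q.T @ Q, np.eye(3)))         # True
print(np.allclose(Q.T @ A @ Q, np.diag(lam)))  # True

# the new coordinates are eta_i = Q_i . x, i.e. eta = Q^T x,
# and the map is invertible since Q is orthogonal: x = Q eta
x = np.array([1.0, 2.0, 3.0])
eta = Q.T @ x
print(np.allclose(Q @ eta, x))                 # True
```

The same three checks work for any real symmetric A; only the eigenvalues change.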
Once you know Γ in terms of the η_i's, you can always write it down in terms of the x_i's, x₁, x₂, …, x_d. So this is the equation for a characteristic hypersurface: φ = 0 is a characteristic hypersurface if and only if this equation is satisfied. Now observe: if all the λ_i's are of the same sign (having a sign already means nonzero), then the sum ∑ λ_i (∂φ/∂η_i)² will either be positive or negative, so it can never be 0 for a regular surface, which means there are no characteristic surfaces. Therefore, if all the eigenvalues of A (that is what the λ_i's are) have the same sign, the PDE (DLCC) is of elliptic type. Now, how about parabolic type? What is the definition? One of the independent variables should be missing from the principal part. Here that means at least one of the eigenvalues of A is 0: if λ_k = 0, then the second order derivative ∂²w/∂η_k² does not appear in the new PP. So the PDE (DLCC) is of parabolic type if at least one of the eigenvalues of A is 0 and all the other eigenvalues are of the same sign. Finally, (DLCC) is of hyperbolic type if it is neither elliptic nor parabolic; that is the definition. What does that translate to in terms of the eigenvalues of A? This is the case when the matrix A has at least one positive eigenvalue and at least one negative eigenvalue. Not elliptic means the eigenvalues are not all of the same nonzero sign; not parabolic rules out the case of a zero eigenvalue with all the remaining eigenvalues of the same sign. If you work out what remains, you get exactly this case, and when it happens the equation is hyperbolic.
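The eigenvalue criteria above can be collected into a short classifier; this is a sketch, with a tolerance parameter of my own for deciding numerical zeros:

```python
import numpy as np

def classify(A, tol=1e-10):
    """Type of the constant-coefficient PDE with symmetric coefficient matrix A."""
    lam = np.linalg.eigvalsh(A)          # real eigenvalues of a symmetric matrix
    pos = int(np.sum(lam > tol))
    neg = int(np.sum(lam < -tol))
    zero = len(lam) - pos - neg
    if zero == 0 and (pos == 0 or neg == 0):
        return "elliptic"                # all eigenvalues nonzero, same sign
    if zero >= 1 and (pos == 0 or neg == 0):
        return "parabolic"               # some eigenvalue 0, the rest one sign
    return "hyperbolic"                  # at least one positive and one negative

print(classify(np.eye(3)))                    # Laplace operator: elliptic
print(classify(np.diag([1.0, -1.0, -1.0])))   # wave operator: hyperbolic
print(classify(np.diag([1.0, 1.0, 0.0])))     # heat operator: parabolic
```

The three sample calls match the Laplace, wave, and heat equations that motivated the classification.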
So, as per our classification, this equation which is here is a hyperbolic equation. We have to ask what the eigenvalues are: they are 1, 1, −1, −1, because the matrix A is the diagonal matrix diag(1, 1, −1, −1). So, when compared to the wave equation, the above equation has two time-like variables: for the wave equation the eigenvalues are like 1, −1, …, −1 or −1, 1, …, 1, with exactly one eigenvalue of one sign and all the others of the opposite sign, that is all. But here there are two negative eigenvalues and two positive eigenvalues, so we may say it has two time-like variables, and some authors call such equations ultra-hyperbolic. More precisely, the definition of ultra-hyperbolic is the following: an equation is called ultra-hyperbolic if the matrix A has at least two positive eigenvalues and at least two negative eigenvalues, and none of the eigenvalues is 0. Our definition of hyperbolic allows some eigenvalue to be 0; for ultra-hyperbolic, by definition, there are at least two positive eigenvalues, at least two negative eigenvalues, and no zero eigenvalue. That is what is called ultra-hyperbolic; it is just for definition's sake. Now, let us look at an example and determine its type: u_xx + u_yy + u_zz + 2u_xy + 2u_yz + 2u_xz = 0. What is the matrix here? The diagonal entries are 1, 1, 1, and the off-diagonal entries are also all 1, so the coefficient matrix is the 3 × 3 matrix of all ones. And we have to ask what its eigenvalues are. This is a very well-known matrix by now; everybody knows its eigenvalues. The matrix is clearly singular and its rank is 1, therefore its nullity is 2, which means 0 is an eigenvalue of multiplicity 2. The remaining eigenvalue is 3, which is the row sum: if you notice, the sum of each row is the same constant 3, so (1, 1, 1) is an eigenvector with eigenvalue 3. So we know everything very clearly here: the eigenvalues are λ₁ = λ₂ = 0 and λ₃ = 3.
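The eigenvalue claim for this all-ones matrix can be checked numerically, as a quick sketch:

```python
import numpy as np

# coefficient matrix of u_xx + u_yy + u_zz + 2u_xy + 2u_yz + 2u_xz = 0
A = np.ones((3, 3))

# rank 1, hence nullity 2: eigenvalue 0 with multiplicity 2, and eigenvalue 3
lam = np.linalg.eigvalsh(A)               # returned in ascending order
print(np.allclose(lam, [0.0, 0.0, 3.0]))  # True

# (1, 1, 1) is an eigenvector for the eigenvalue 3, since each row sums to 3
v = np.array([1.0, 1.0, 1.0])
print(np.allclose(A @ v, 3 * v))          # True
```

With one zero eigenvalue (here two) and the remaining eigenvalue positive, the classification from before gives parabolic type, as the lecture concludes below.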
And we need to take eigenvectors. If you look carefully, I am putting factors of 1/√2 and 1/√6 here to make the lengths equal to 1: each of these two eigenvectors for the eigenvalue 0 is of unit length, and they are orthogonal to each other, their dot product is 0, because Q has to be an orthogonal matrix. The other eigenvalue is 3, with eigenvector (1, 1, 1); I put a factor of 1/√3 to make its length 1, and of course it is orthogonal to both of the others. Now, I consider Q like this: the first column is one of the eigenvectors for the eigenvalue 0, the second column is the second eigenvector we wrote down for the eigenvalue 0, and the third column is the eigenvector for the eigenvalue 3. Then QᵀAQ becomes the diagonal matrix diag(0, 0, 3). So these are the change of variables: φ₁ = (x − y)/√2, φ₂ = (x + y − 2z)/√6 and φ₃ = (x + y + z)/√3, and the PDE transforms accordingly. This is exactly what we saw in the theory: take a Q such that QᵀAQ is diag(λ₁, λ₂, …, λ_d), then introduce the new variables η_i = Q_i · x. In this case I have written it down in terms of x, y, z, and Q_i · (x, y, z) is precisely this. So you end up with λ₃ ∂²w/∂η₃², that is, the equation 3 ∂²w/∂η₃² = 0. Characteristic surfaces are η₁ = 0 and η₂ = 0, because ∂²w/∂η₁² and ∂²w/∂η₂² do not appear in the new PP. What is η₁ = 0? In the x, y, z coordinates it amounts to x − y = 0, and η₂ = 0 is x + y − 2z = 0. The given PDE is of which type? Parabolic. So, let us summarize what we did in this lecture. The impossibility, or near impossibility, of finding canonical forms for linear PDEs with variable coefficients was discussed. Canonical forms for linear PDEs with constant coefficients were obtained. Classification for linear PDEs with constant coefficients was understood more clearly. Thank you.