So, this is again back to non-singularity. Recall that if P is a point of X, an affine variety in A^n, then P is called a non-singular point of X, or we say X is non-singular at P, if for some given set of generators f_1, ..., f_l of the ideal I(X), the rank of the Jacobian matrix of f_1, ..., f_l with respect to the variables x_1, ..., x_n at the point P equals the codimension of X — that is, the dimension of the ambient affine space minus the dimension of X, namely n − dim X. So this was the definition: a point of an affine variety is called a non-singular point, or a smooth point, or we say the variety is non-singular at that point, if for some set of generators of the ideal of the variety the Jacobian matrix, which is just the matrix (∂f_j/∂x_i)(P) of partial derivatives of the generators with respect to the variables, has rank equal to the codimension of X; here the x_i are the affine coordinates on A^n. This definition can then be used to say when a point on any variety is non-singular, and the definition will be: any point of any variety lies in an open set which is isomorphic to an affine variety, because every variety is covered by finitely many open subsets isomorphic to affine varieties.
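As a concrete illustration (my own, not from the lecture), the Jacobian criterion can be checked with sympy on the cuspidal cubic X = V(y^2 − x^3) in A^2; the choice of curve and of the sample point (1, 1) on it are hypothetical. Here I(X) = (y^2 − x^3) and dim X = 1, so the codimension is 2 − 1 = 1.

```python
import sympy as sp

x, y = sp.symbols('x y')
generators = [y**2 - x**3]  # generators of I(X) for the cuspidal cubic
codim = 1                   # n - dim X = 2 - 1

# Jacobian matrix (d f_j / d x_i) of the generators w.r.t. the variables
J = sp.Matrix([[sp.diff(f, v) for v in (x, y)] for f in generators])

def is_nonsingular(point):
    """Jacobian criterion: rank of J at the point equals the codimension."""
    return J.subs({x: point[0], y: point[1]}).rank() == codim

print(is_nonsingular((0, 0)))  # False: the Jacobian (-3x^2, 2y) has rank 0 at the cusp
print(is_nonsingular((1, 1)))  # True: rank 1 = codim, a smooth point
```

The rank drops below the codimension exactly at the origin, so the cusp is the unique singular point of this curve.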
So you take the point on the variety to be a point of an affine open set containing it, and you define the point to be non-singular if it is non-singular as a point of that affine open set. In a way, this non-singularity should not depend on the open set in which you consider it: non-singularity, or smoothness, at a point is a local notion. When you say a curve is smooth at a point, it means there is a sufficiently small neighborhood of the point in which it is smooth, and similarly, when you say a surface is smooth at a point, it means it is smooth in a sufficiently small neighborhood on the surface. In classical geometry, if something is smooth at a point then it is also smooth in a neighborhood of that point; this is true in algebraic geometry as well, but it requires more commutative algebra to establish. The fact is that smoothness at a point spreads out to a neighborhood of the point. So the moral of the story is that when you say X is non-singular at a point, it is non-singular not only at that point but on an open set containing it. And any nonempty open set in the Zariski topology is huge, so what this tells you is that the moment you have one smooth point, there is a dense open set full of smooth points. This is what happens in truth: the set of singular points — points which are not non-singular — is only a small closed set, which means all the points of a huge open set are non-singular, they are all smooth points.
But these are geometric facts, and one needs a sufficient amount of commutative algebra to establish them. Now, at the level of this definition itself there are problems, in the sense that the variety X could sit in some other affine space, and if I change the affine space then the ideal of X in the affine space will change. That is one ambiguity. The other ambiguity is that for this ideal I am taking some set of generators; if I take another set of generators, there may be a different number of them and the matrix changes, while the condition is on the rank of this matrix. So there are a lot of ambiguities in this definition — why should it be a good definition when there are so many choices to be made? First of all, I have to choose an embedding of X into some affine space, namely I have to think of X as an irreducible closed subset of some A^n, and a variety can be embedded in many ways: a line can be thought of as a line in the plane, a line in 3-space, or a line in an even higher-dimensional space. Then, once you embed it, the ideal of X for that embedding already depends on the embedding, and the same ideal can have different sets of generators, possibly with different numbers of generators, so the Jacobian matrix of the generators itself changes. But the definition says: whatever it is, calculate its rank; if the rank equals the codimension, the point is a smooth point, a non-singular point. So it looks like this definition involves too many ambiguous things and might not be
consistent if you change the various choices. But that is not true. The point is that, geometrically, this has to do with looking at the tangent space to the variety X at that point, and it is for that geometric reason that the definition actually works. But if you want to really verify this, you translate from geometry into commutative algebra, and the necessary commutative algebra is given by the following theorem, due to Oscar Zariski: a point P of a variety X is non-singular if and only if dim_k m_P/m_P^2 = dim O_{X,P} = dim X, where m_P is the unique maximal ideal of the local ring O_{X,P}. This is a fundamental result. The definition of non-singularity we are using is full of ambiguities, whereas the theorem gives a condition only on the local ring of the variety at the point. You can see one thing immediately: because it is a condition on the local ring, I can assume the point lies on an affine variety, by passing to an affine open subset of the variety containing the given point — the condition depends only on the local ring. And the beautiful thing is that this is intrinsic to the variety: nobody is bothered here about whether X is affine or quasi-affine, or which affine space it is embedded inside; the condition is completely in terms of things which are intrinsic to the variety, and which do
not depend on the variety being seen as a subset — irreducible closed, or open in an irreducible closed set — of some affine space or some projective space. That is the reason why this theorem is very, very important. Interestingly, the proof is linear algebra, but it really has to do with geometry: if you look at it geometrically, what you are trying to do is calculate the tangent space to the variety at that point, and you are also trying to look at the normal space — the normal directions inside the tangent space that the ambient space has at that point. Roughly, geometrically, this is what is happening: when you think of an object embedded inside a space and you take a point on that object, it has a tangent space; the normal space consists of the normal vectors in the bigger space; and the two dimensions add up to the dimension of the bigger space, provided the point is smooth. So if I take a smooth surface in 3-space, take a point on it, and draw the tangent space, I get a neat 2-dimensional tangent plane, and a unique normal direction to the surface — given by the gradient, if the surface is defined by a single equation — so the normal space is 1-dimensional and the tangent space is 2-dimensional;
the sum of these spaces is 3-dimensional, which is the ambient space in which the object is embedded. However, if you take a singular point this will not happen: at a singular point the tangent space itself may be everything, and you may not have any normal vectors. For example, take the cone in 3-space and take the vertex of the cone, which is a non-smooth point: if you draw the tangent space there, you get the whole 3-space, so at the vertex all vectors are tangent vectors and there are no normal vectors. This happens because that point is a singular point. If you have a smooth point, you get a tangent space at that point whose dimension equals the dimension of the object, and the normal space — consisting of the vectors perpendicular to the tangent vectors — gives a complementary subspace inside the tangent space of the ambient space at that point, such that the two spaces together span the whole ambient tangent space. This happens at smooth points and fails at non-smooth points, and it is exactly that calculation which is being reflected in the theorem — but reflected using algebra. So let us do the following.
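The cone example can be checked by the same Jacobian computation; the following sympy sketch (again mine, not from the lecture, with the standard cone x^2 + y^2 = z^2 as a hypothetical choice) computes the tangent-space dimension n − rank J(P) for this surface of dimension 2 in A^3.

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
f = x**2 + y**2 - z**2                         # the cone in 3-space
grad = sp.Matrix([[sp.diff(f, v) for v in (x, y, z)]])

def tangent_dim(point):
    """Dimension of the tangent space: n - rank of the Jacobian (n = 3 here)."""
    return 3 - grad.subs(dict(zip((x, y, z), point))).rank()

print(tangent_dim((0, 0, 0)))  # 3: at the vertex every vector is tangent
print(tangent_dim((1, 0, 1)))  # 2: at a smooth point, an honest tangent plane
```

At the vertex the gradient (2x, 2y, −2z) vanishes, so there is no normal direction at all, exactly as described above.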
So, without loss of generality, we assume X is affine. Why is this correct? Because, after all, my definition of non-singularity is that a point P of a variety is non-singular if it is non-singular as a point of an open subset which is affine — I have defined non-singularity only for points of affine varieties, and any variety can be covered by open subsets which are affine varieties. So without loss of generality I can assume X is affine; what I mean by this is that you go to an affine open subset containing the point P. And by going to an affine open set which contains P, the local ring is not changed, because the local ring does not depend on passing to a smaller open set containing the point. So both the hypothesis and the conclusion of the theorem, in both directions, are unaffected if you go to an affine open subset, and it is enough to consider X affine. So say we are in this situation: P is a point of X, which is an affine variety, that is, an irreducible closed subset of A^n. Now we will do some calculations. First of all, let P = (λ_1, ..., λ_n) — P is a point of n-space, so take its coordinates. By the Nullstellensatz, the point P corresponds to a unique maximal ideal in the affine coordinate ring of the affine space, which is k[x_1, ..., x_n]; that maximal ideal is M_P = (x_1 − λ_1, ..., x_n − λ_n), generated by the x_i − λ_i. So this is the maximal ideal that corresponds to this point. And now what we will do is: we define
a map ψ : k[x_1, ..., x_n] → k^n. The map is very simple: take any polynomial g in n variables and take its gradient at P, which means ψ(g) = (∂g/∂x_1(P), ..., ∂g/∂x_n(P)) — take the partial derivatives with respect to each of the variables, in this order, and evaluate them at P. Of course, when I say partial derivative here, it is a derivative in the formal sense: you know how to write the derivative of a polynomial without the calculus methods involving epsilons, deltas and limiting processes — you just use the standard formal rules for differentiation. Each ∂g/∂x_i is again a polynomial in the x_i, and you substitute the point P, namely λ_i for each x_i, to get an n-tuple in k^n. Now, the nice thing about this map ψ is that it is k-linear and surjective. It is k-linear because the partial differential operators are all linear: if I replace g by g_1 + g_2, I get the sum of the corresponding tuples, and if I multiply g by a constant, that constant comes out. And it is surjective because of what happens on the generators of the maximal ideal: if I take g = x_1 − λ_1, I get (1, 0, ..., 0), the first standard basis vector, because differentiating with respect to x_1 gives 1, and substituting P has no effect — it remains 1 — while differentiating with respect to the other variables gives 0.
So for g = x_1 − λ_1 I get (1, 0, ..., 0), for g = x_2 − λ_2 I get (0, 1, 0, ..., 0), and so on; in this way I get a basis of k^n, and since the image contains a basis, ψ is surjective. So the moral of the story is: ψ is k-linear and surjective, with ψ(x_j − λ_j) = e_j, the standard j-th basis vector. And now the point is that the kernel of ψ restricted to M_P is exactly the square of this maximal ideal. In other words, ψ induces an isomorphism of M_P/M_P^2 with the n-dimensional vector space k^n. To see this, take an element of M_P^2: it is of the form Σ_{i,j} g_{ij} (x_i − λ_i)(x_j − λ_j). Why? An element of the square of an ideal is a finite sum of products, where each product has two factors, one each from the ideal, and any element of M_P is a combination of the generators x_i − λ_i with polynomial coefficients.
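The two properties just established — ψ sends the generators x_j − λ_j to the standard basis, and ψ kills a product of two generators — can be seen in a tiny sympy sketch; the point P = (1, 2) in A^2 is a hypothetical choice of mine.

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
P = {x1: 1, x2: 2}               # a sample point (lambda_1, lambda_2)

def psi(g):
    """psi(g) = (dg/dx_1(P), dg/dx_2(P)): the gradient of g evaluated at P."""
    return [sp.diff(g, v).subs(P) for v in (x1, x2)]

print(psi(x1 - 1))               # [1, 0]: first standard basis vector
print(psi(x2 - 2))               # [0, 1]: second standard basis vector
print(psi((x1 - 1)*(x2 - 2)))    # [0, 0]: an element of M_P^2 is killed
```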
So if I take an element of M_P^2, it looks like that, and now the point is to apply the operator ∂/∂x_l and evaluate at P — this is the l-th component of the map ψ, that is, ψ followed by the l-th projection. I have to differentiate a product of three factors, so I use the product rule, which is also valid in the formal setting. What do I get?

∂/∂x_l [ Σ_{i,j} g_{ij} (x_i − λ_i)(x_j − λ_j) ] evaluated at P
  = Σ_{i,j} [ (∂g_{ij}/∂x_l)(P) · (x_i − λ_i)(P) · (x_j − λ_j)(P)
            + g_{ij}(P) · ∂(x_i − λ_i)/∂x_l (P) · (x_j − λ_j)(P)
            + g_{ij}(P) · (x_i − λ_i)(P) · ∂(x_j − λ_j)/∂x_l (P) ].

Now look at it carefully: this is going to vanish. In the first term the factors (x_i − λ_i)(P) and (x_j − λ_j)(P) both vanish — even if the coefficient is not 0, substituting P means putting x_i = λ_i and x_j = λ_j. In the second term the factor (x_j − λ_j)(P) vanishes, and in the third the factor (x_i − λ_i)(P) vanishes. So all these terms vanish, and the whole expression
is actually 0. So the moral of the story is that this linear map kills every element of M_P^2; in other words, M_P^2 is contained in the kernel of ψ (restricted to M_P). But the more important thing is that M_P^2 is exactly equal to that kernel. Conversely, let h ∈ M_P belong to the kernel of ψ. Now h is an element of M_P, so h = Σ_i g_i (x_i − λ_i): this is how an element of the maximal ideal looks, a combination of the generators x_i − λ_i with polynomial coefficients g_i (and note these g_i are different from the g_{ij} above — they are different things). I have taken h in the kernel of ψ, so ψ(h) = 0, which means (∂h/∂x_l)(P) = 0 for all l — I am trying to show that the kernel of ψ restricted to M_P is also contained in M_P^2, so that the kernel equals M_P^2. Now write that out: differentiating,

0 = (∂h/∂x_l)(P) = Σ_i [ (∂g_i/∂x_l)(P) · (x_i − λ_i)(P) + g_i(P) · ∂(x_i − λ_i)/∂x_l (P) ].

And what you must understand is that the first term is 0: whenever x_i − λ_i appears, if I substitute the
point P, it is going to vanish, so that term is gone. In the second term, ∂(x_i − λ_i)/∂x_l is 0 if l ≠ i, so those terms are killed, and the term survives only when l = i, where the derivative is 1 and I get g_l(P). So this tells you that g_l(P) = 0, and this happens for every l. But that means the g_l lie in the maximal ideal of P: if a polynomial vanishes at P, then it is precisely in M_P, because M_P is exactly the ideal of polynomials which vanish at P. So all the g_i are in the maximal ideal, and the factors x_i − λ_i are also in the maximal ideal, therefore each product is in M_P^2, therefore h = Σ g_i (x_i − λ_i) ∈ M_P^2. All these things put together tell you that M_P^2 is exactly the kernel of ψ restricted to M_P. Why take h in M_P? Because if h is not in M_P, I cannot write it in this form: h can be written in this form if and only if it is in M_P, since the x_i − λ_i are the generators of M_P — a general polynomial need not be like that. What I forgot to tell you is that ψ goes from the polynomial ring to k^n, and mind you, every ideal is also a k-subspace; so ψ, being k-linear, can be restricted to the subspace M_P. In fact, note that M_P^2 is a k-subspace (a vector subspace) of M_P, and ψ restricted to M_P is still k-linear and surjective, because I have already told you that ψ
takes the generators of M_P to the standard basis; so ψ restricted to M_P is also surjective. So consider ψ restricted to M_P, a map M_P → k^n — of course, since I took h in M_P, I am looking at the kernel of this restriction. It is surjective, because the images of the generators of M_P under ψ give the standard basis, and it induces a k-linear isomorphism from M_P/M_P^2 to k^n; let me call this isomorphism ψ′. So this simple linear algebra in the first step gives the calculation: take a point P in affine space and the maximal ideal corresponding to it; then M_P/M_P^2 is just the n-dimensional vector space k^n. And in retrospect, in view of the theorem we are going to prove, this says that every point of affine space is non-singular: the local ring of affine space at the point is just the polynomial ring localized at M_P, its maximal ideal is generated by the image of M_P in that localization, and if you take the quotient of that maximal ideal by its square you again get M_P/M_P^2, whose dimension is n, the dimension of affine space. So for X = A^n the condition of the theorem holds at every point, and according to the theorem every point of affine space is non-singular: affine spaces are smooth. That is the significance of this calculation. Well, anyway, now I need to come down from A^n to the subvariety X.
So for that, I do the following. You see, P ∈ X is the same as saying that I(X) ⊆ M_P. This is just applying the functor I to the inclusion {P} ⊆ X and using the Nullstellensatz: applying I reverses inclusions, so I({P}) ⊇ I(X), and the ideal of the point P is exactly the maximal ideal M_P, so M_P ⊇ I(X). Now, M_P^2 is a subspace of M_P and I(X) is also a subspace of M_P, and the sum of two subspaces is again a subspace, so I(X) + M_P^2 is a subspace of M_P; and I can now divide by M_P^2, so (I(X) + M_P^2)/M_P^2 is a subspace of the k-vector space M_P/M_P^2, which is isomorphic via ψ′ to k^n. The beautiful thing is what you get if you take the image of this subspace under ψ′. Of course ψ′ is an isomorphism, so the image of the subspace under ψ′ is a subspace of k^n — and what is the dimension of that subspace? The image of a subspace under a linear map is given by taking the span of the images of a set of generators of the subspace: so if I want the image of this subspace, I just take a set of generators for it, apply ψ′, and take the span. In other
words, instead of taking the span of a set of vectors, you can write those vectors as the rows (or columns) of a matrix and take the rank of that matrix, because the rank of the matrix gives you the dimension of the image — this is part of the standard rank-nullity theorem of linear algebra. But now, if I take a set of generators f_1, ..., f_l of I(X), then those generators also give me generators of (I(X) + M_P^2)/M_P^2, and applying ψ′ is exactly calculating the Jacobian matrix: applying ψ′ to each generator gives one row (or one column, depending on how you write it) of the Jacobian matrix. So the images of all the generators assemble into the Jacobian matrix, and its rank is precisely the dimension of the image. The moral of the story is:

dim_k (I(X) + M_P^2)/M_P^2 = dim_k ψ′( (I(X) + M_P^2)/M_P^2 ) = rank (∂f_i/∂x_j)(P),

where f_1, ..., f_l are a set of generators of the ideal of X, as above. Looking at it like this, it is very clear that no matter what set of generators I use — instead of f_1, ..., f_l, suppose I use another set h_1, ..., h_{l′}, with however many generators, I do not care — in any case I am only going to get the dimension of this one subspace. That is the reason why the rank of this Jacobian does not depend on what your generators are: you always get only the dimension of this subspace, for any fixed embedding of the affine variety into affine space.
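To see this independence in an example (mine, not the lecture's): for the affine twisted cubic {(t, t^2, t^3)}, the ideal is generated by y − x^2 and z − x^3, but also by y − x^2 and z − xy, since z − xy = (z − x^3) + x(x^2 − y); a sympy check shows that these generating sets — even a redundant one with three generators — give Jacobians of the same rank 2 = codim at a point of the curve.

```python
import sympy as sp

x, y, z = sp.symbols('x y z')

gens_a = [y - x**2, z - x**3]            # one generating set of I(X)
gens_b = [y - x**2, z - x*y]             # another: z - x*y = (z - x^3) + x*(x^2 - y)
gens_c = [y - x**2, z - x**3, z - x*y]   # redundant extra generator added

def jac_rank(gens, point):
    """Rank of the Jacobian of the given generators at the point."""
    J = sp.Matrix([[sp.diff(f, v) for v in (x, y, z)] for f in gens])
    return J.subs(dict(zip((x, y, z), point))).rank()

P = (1, 1, 1)  # the point t = 1 on the curve (t, t^2, t^3)
print([jac_rank(g, P) for g in (gens_a, gens_b, gens_c)])  # [2, 2, 2]
```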
Even if you change the set of generators, when you calculate the rank of the Jacobian of the generators with respect to the variables you get the same rank, because it is the dimension of one and only one subspace — this subspace, under this isomorphism. Fine. Now it is a matter of just a little bit more linear algebra. Let me keep that statement as it is and continue; it is probably only a couple of lines more. What you should remember is this: take the quotient

(M_P/M_P^2) / ( (I(X) + M_P^2)/M_P^2 )

— a space by a subspace. My claim is that this quotient is the same as m_P/m_P^2, where m_P is the unique maximal ideal of the local ring O_{X,P} of X at P; that is, we have an isomorphism of k-vector spaces. Why? Because of how you get the local ring: the local ring of an affine variety at a point is obtained by taking the affine coordinate ring A(X) of the variety and localizing it at the maximal ideal which corresponds to the point by the Nullstellensatz — this is the definition of the local ring of an affine variety at a point. But A(X) is just the polynomial ring modulo the ideal I(X), and the maximal ideal in question is just M_P/I(X): you localize at M_P/I(X). Because of this, if you take the quotient above, it is the same as m_P/m_P^2, where the small m_P, the unique maximal ideal of O_{X,P},
is the ideal generated by the image of M_P/I(X) in O_{X,P}. So this quotient is the same as m_P/m_P^2 as k-vector spaces, and therefore, counting dimensions — the dimension of the space minus the dimension of the subspace equals the dimension of the quotient — you get

dim_k m_P/m_P^2 = dim_k M_P/M_P^2 − dim_k (I(X) + M_P^2)/M_P^2.

What is the first term? We have already proved that M_P/M_P^2 is isomorphic to k^n as vector spaces, so because of the isomorphism ψ′ its dimension is n. And the second term, as we have seen, is the dimension of the image of that subspace under ψ′, which is the rank of the Jacobian. So I get

dim_k m_P/m_P^2 = n − rank (∂f_i/∂x_j)(P),

and this equals n − (n − dim X) if and only if P, as a point of X, is non-singular — that is our definition of non-singularity: the rank of the Jacobian equals the codimension exactly at a smooth point, a non-singular point.
So this is equal to n − (n − dim X) = dim X if and only if the point P is non-singular, and that proves the theorem. What you have proved is: dim_k m_P/m_P^2, where m_P is the unique maximal ideal in the local ring corresponding to the point P on the variety X, equals the dimension of X if and only if P is a smooth point. And the importance of this is that m_P/m_P^2 gives the tangent space at the point — strictly speaking its dual is the tangent space, but the dimensions are the same — so its dimension is the dimension of the tangent space, and you are saying that a point is a smooth point if and only if the dimension of the tangent space equals the dimension of the variety on which it lies. In general, at a point which is not smooth, this dimension can only go bigger: you get more tangent vectors, your tangent space becomes bigger, and if the tangent space becomes bigger then the rank of the Jacobian becomes smaller, and the point becomes a singular point. So the moral of the story is that this quotient gives you the tangent space, and the Jacobian calculates the normal space — the set of normal directions at that point in the ambient space inside which this object is sitting.
So if the point is a smooth point and X is an r-dimensional variety in n-dimensional space, then the dimension of the tangent space is exactly r, the dimension of the normal space — all the vectors perpendicular to the tangent vectors — is n − r, and that equals the rank of the Jacobian, for any set of generators. That is what is happening geometrically, and that is what the theorem says. So that finishes the proof of this theorem. In this connection I need to tell you that the set of singular points is a closed subset. That is a fact which can be seen immediately from this argument: the singular points are the points of the variety where the rank of this matrix falls, and the locus where the rank of a matrix falls is given by the vanishing of all the maximal minors. The Jacobian matrix is a matrix with polynomial entries, and as you vary the point P, the singular points are exactly those at which the evaluated Jacobian matrix has rank less than n − r; that corresponds to the vanishing of the determinants of all the maximal square minors of the matrix — if all the maximal square minors vanish at a point the rank falls, and only at such points does the rank fall — and therefore that is a closed subset.
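Here is a sympy sketch (mine, with the cusp as a hypothetical example) of that closedness for X = V(y^2 − x^3) in A^2: the Jacobian is the single row (−3x^2, 2y), its maximal minors are just its entries, and solving for their common vanishing on X cuts out the singular locus as a closed set.

```python
import sympy as sp

x, y = sp.symbols('x y')
f = y**2 - x**3
minors = [sp.diff(f, x), sp.diff(f, y)]   # -3x^2 and 2y: the 1x1 maximal minors

# The singular locus is the closed set {f = 0} intersected with
# {all maximal minors of the Jacobian = 0}.
sing = sp.solve([f] + minors, (x, y), dict=True)
print(sing)   # the cusp at the origin is the only singular point
```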
So this argument tells you that the set of singular points of X is a closed subset of X. But the more important fact is that this closed subset is by no means the whole space: it can only be a proper closed subset, which means it is of smaller dimension. That needs proof, and I will prove it in the next lecture.