Okay, thanks. So I'll get started with the second lecture; let me just share this screen. In today's lecture I want to continue with the general algebraic theory of quadratic forms over fields of characteristic not 2, and isomorphisms between them; we're going to use some of these tools later in the course to study invariants of quadratic forms and the Hasse-Minkowski theorem. And I want to start today with a really fundamental theorem due to Witt. Throughout the lecture, we fix a field F whose characteristic is not equal to 2. So the theorem, Witt's cancellation theorem, is the following. Let V, W, Z be quadratic forms over F, and suppose that V ⊕ Z is isomorphic to W ⊕ Z as quadratic spaces. Then the conclusion is that V is isomorphic to W. So it's saying that you can cancel: if two things become isomorphic after you add the same summand, then they were isomorphic to begin with. And this is the type of result which is generally not true. There are lots of kinds of mathematical objects where you want to know whether A and B are isomorphic, and there's a weaker question, which is what happens if A ⊕ C is isomorphic to B ⊕ C. That's a stable isomorphism question, and usually, for various reasons, stable isomorphism classes of most types of mathematical objects are much easier to understand than isomorphism classes.
So this theorem is kind of remarkable: it says that in the setting of quadratic forms there's no difference between these two concepts. One caveat: when I say isomorphic, I mean there exists an isomorphism, so the two isomorphisms here are not necessarily related. So what's special about quadratic forms — what makes this work when it fails for other classes of mathematical objects? It's that, as mentioned last time, a key feature of quadratic forms is that they have lots of symmetries. With that in mind, let's prove the theorem. First of all, Z is diagonalizable, as we saw last time, meaning it's a sum of one-dimensional forms. So by induction on the dimension of Z we can assume that Z itself is one-dimensional: Z is the one-dimensional form ⟨a⟩, in other words spanned by a vector whose product with itself equals a. So suppose we have an isomorphism, call it T, from V ⊕ ⟨a⟩ to W ⊕ ⟨a⟩. What we want to do is cook up an isomorphism from V to W. To do this, let me give these vectors names: say e1 is the basis element of ⟨a⟩ on the left, and f1 is the basis element of ⟨a⟩ on the right. The basic problem is that, in general, T is just some isomorphism, and it's not true that T(e1) = f1. If T(e1) were equal to f1, then you'd have an isomorphism taking e1 to f1, hence taking the orthogonal complement of e1 to the orthogonal complement of f1, and that would say exactly that T carries V to W.
So if T(e1) = f1 — and in fact you don't even need T(e1) to equal f1; it could be −f1 and this argument still works — then T carries V, which is e1-perp, into W, which is f1-perp, and induces the isomorphism we want. So the problem is that you have some isomorphism, but it doesn't respect the direct sum decomposition: it doesn't have to carry e1 to f1. In general, for this type of thing, that's exactly why stable isomorphism is a weaker relation than isomorphism. What saves us in the quadratic form setting is that you have lots of isomorphisms, and even though T(e1) need not equal f1, you can always post-compose with some other automorphism to make this happen. So observe that T(e1) and f1, thought of as vectors living in the quadratic space W ⊕ ⟨a⟩, are two vectors of the same length squared, namely a. And now there's this wonderful lemma — it was in the exercises from last time, but I want to state it here: whenever you have two vectors in a quadratic space with the same self-product, there's an automorphism of that quadratic space carrying one to the other. Let me state this as a proposition; I'll prove the proposition, but first I'll explain how the proposition implies the theorem. Proposition: if (V, q) is any quadratic space, then the orthogonal group of (V, q) acts transitively on the set of all vectors v in V such that v·v = a, for any nonzero a in F.
So you have lots of symmetries of any quadratic space. A basic example: if you take Euclidean space, this is saying that the orthogonal group acts transitively on the (n − 1)-sphere; the proposition is a generalization of that to any quadratic space. I'll prove the proposition in a second, but first let me explain why the theorem follows. Apply the proposition to W ⊕ ⟨a⟩ to obtain an automorphism U in the orthogonal group of W ⊕ ⟨a⟩ such that U(T(e1)) = f1. Then observe that U composed with T is an isomorphism from V ⊕ ⟨a⟩ to W ⊕ ⟨a⟩ that carries e1 to f1, and hence induces an isomorphism from V to W. So that proves the theorem: once you have enough automorphisms to act transitively on these generalized spheres, the result follows. Okay, so now let me explain the proof of the proposition — again, this was on the exercises from yesterday. The idea is that you're going to use a suitable reflection to go between any two such vectors, except you might also have to multiply by ±1. So, for the proof: suppose v and w are vectors in the quadratic space such that v·v = w·w = a, some nonzero element of the ground field. What we want is an element T of the orthogonal group of this quadratic space such that T(v) = w. And what we're going to do is consider the reflections through v ± w. So consider the two vectors v + w and v − w.
And the observation is that because v·v = w·w, these two vectors are orthogonal: (v + w)·(v − w) = v·v − w·w = 0. As a consequence, either v + w or v − w must be anisotropic. Indeed, if they were both isotropic, then, since two orthogonal isotropic vectors have isotropic sum, 2v would be isotropic, hence v itself would be isotropic, contradicting v·v = a ≠ 0. Let me suppose for simplicity that v − w is anisotropic; I'll explain how to handle the other case. Then consider the reflection through the vector v − w — you're allowed to reflect through any vector which is anisotropic, and it has to be anisotropic because you're dividing by that vector dotted with itself. Observe that this reflection, essentially by construction, carries v − w to w − v: it's −1 on the line through v − w, and it's the identity on the orthogonal complement, so it carries v + w to v + w. Hence it carries v to w, as desired. So this reflection works great — I could even have just started with this. If v − w is anisotropic, you just take the reflection through it and you have your orthogonal transformation. If not, you have to work a little harder: if v − w is isotropic, then v + w is anisotropic, and you compose the reflection through v + w with −1. So if v − w is isotropic you have to add an extra sign at the end, but that's fine, because −1 is also in the orthogonal group — it's just scaling.
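As a quick sanity check on this reflection argument, here is a small Python sketch over the rationals. The helper names, the Euclidean form, and the particular vectors are my own illustrative choices, not from the lecture; the formula is the standard reflection x ↦ x − 2(x·u / u·u) u through an anisotropic vector u.

```python
from fractions import Fraction

def dot(B, x, y):
    # bilinear form x^T B y given by the symmetric matrix B
    n = len(x)
    return sum(B[i][j] * x[i] * y[j] for i in range(n) for j in range(n))

def reflect(B, u, x):
    # reflection through u: x - 2 (x.u / u.u) u; u must be anisotropic,
    # since we divide by u.u
    c = Fraction(2) * dot(B, x, u) / dot(B, u, u)
    return [xi - c * ui for xi, ui in zip(x, u)]

# Example: the Euclidean plane, with v and w of the same length squared
B = [[1, 0], [0, 1]]
v = [Fraction(3), Fraction(4)]
w = [Fraction(5), Fraction(0)]
u = [vi - wi for vi, wi in zip(v, w)]   # v - w, anisotropic in this example

assert dot(B, v, v) == dot(B, w, w) == 25
assert reflect(B, u, v) == w            # the reflection carries v to w
```

The same code works verbatim for any symmetric matrix B over the rationals, which is the point of the proposition: no positivity of the form is used anywhere.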
Okay, so that proves the proposition, which tells you that you have this transitive action, and that also completes the proof of Witt's cancellation. So Witt's cancellation theorem, and also a corollary which I'll explain a little later, is a really powerful tool for understanding isomorphism classes of quadratic forms. In particular, one of its key features is that it lets you write quadratic forms in a sort of normal form. So Witt's theorem is fundamental because it enables one to put quadratic forms in a normal form. It also means the following: in general, as I mentioned, studying stable isomorphism classes of objects — isomorphism after you've direct-summed on some other term — is a lot easier, and that's because stable isomorphism classes of objects can often be organized into abelian groups; this is the starting point of K-theory, of K_0. In the case of quadratic forms this leads to a structure called the Grothendieck-Witt ring, which is going to come up next time. It's a commutative ring that you can write down and study in various ways, and what Witt cancellation is saying is that this ring actually encodes the isomorphism classes of quadratic forms. Okay, so we're going to come back to that. But first, before I move on, I also want to make a comment about the exercises — I have to apologize. In the exercises for yesterday, you proved this proposition, and I also asked you to prove the Cartan-Dieudonné theorem, which says that any element of the orthogonal group is a product of at most n reflections, where n is the dimension. And, well, you should definitely try it — it's a theorem and it's completely true.
It's a little harder than what I intended to ask, which is really just that the orthogonal group is generated by reflections. The fact that it's generated by reflections is something you can prove directly from this proposition; there's a more elaborate argument to show that at most n reflections suffice. So today's exercise set, which is already posted in SoCoCo, walks you through the argument proving the stronger form of Cartan-Dieudonné. In fact that precise bound is not going to be used later, but still. Okay, so that's the general theory of quadratic forms. Next I want to explain a particularly fundamental example of a quadratic form, which we saw briefly yesterday but which is really central to the theory: the hyperbolic plane. So, hyperbolic forms. The hyperbolic plane is the quadratic form specified by the symmetric matrix (0 1 / 1 0); it's a two-dimensional quadratic form. You can think of it as follows: it's two-dimensional with a basis given by vectors e1, e2 which are isotropic and such that e1·e2 = 1. By the way, in the condition e1·e2 = 1 you can replace 1 by any other nonzero scalar, by rescaling e1 or e2 appropriately. The key point is that it's generated by two isotropic vectors that have a nontrivial inner product with each other. Also, you can write this as the diagonal form ⟨1, −1⟩, i.e. the quadratic form x² − y². So this is the hyperbolic plane, and one of the basic reasons it's so fundamental is the following proposition. Let V be any quadratic form over F which is isotropic.
Then you can always split off a copy of the hyperbolic plane: V is isomorphic to H ⊕ V′ for some smaller quadratic form V′ — I should give the hyperbolic plane a name; I'll call it H. So whenever you have an isotropic form, you can always split off a copy of the hyperbolic plane and some smaller form. And in fact, by Witt's cancellation theorem, this V′ is determined up to isomorphism. So what's the proof? [Question: are the quadratic forms assumed to be non-degenerate?] Thank you — yes, when I say quadratic form I will only consider non-degenerate quadratic forms, and the field has characteristic not 2. Okay, so how do we prove this? If V is our quadratic form, let lowercase v be a nonzero isotropic vector. Then we're just going to build a hyperbolic plane sitting inside V. By assumption our quadratic form is non-degenerate, so we can find a vector w in V such that w·v ≠ 0. Then basically you just have to normalize: v and w are going to span a hyperbolic plane. Now w doesn't have to be isotropic, but you replace w with w − λv for a suitable λ in F — a shearing automorphism — and then you can normalize so that you have two isotropic vectors whose inner product is nonzero. And then you've got a hyperbolic plane sitting inside V, and you can take V′ to be its orthogonal complement. So this is really useful — my standard example of an isotropic form is just the hyperbolic plane itself.
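The shearing step can be checked numerically. Here is a Python sketch over the rationals with the form x² + y² − z²; this particular form, the vectors, and the helper names are illustrative choices of mine, not from the lecture. Starting from an isotropic v and any w with v·w ≠ 0, the vector w′ = w − λv with λ = (w·w)/(2 v·w) is isotropic, and (v, w′) spans a hyperbolic plane after rescaling the pairing to 1.

```python
from fractions import Fraction

# Gram matrix of the form x^2 + y^2 - z^2 over Q
B = [[1, 0, 0], [0, 1, 0], [0, 0, -1]]

def dot(x, y):
    return sum(B[i][j] * x[i] * y[j] for i in range(3) for j in range(3))

v = [Fraction(1), Fraction(0), Fraction(1)]   # isotropic: 1 + 0 - 1 = 0
w = [Fraction(1), Fraction(0), Fraction(0)]   # w.v = 1 != 0, but w.w = 1

lam = dot(w, w) / (2 * dot(v, w))             # the shearing coefficient
wp = [wi - lam * vi for wi, vi in zip(w, v)]  # w' = w - lam*v

assert dot(v, v) == 0 and dot(wp, wp) == 0 and dot(v, wp) == 1
# Gram matrix of span(v, w') is exactly the hyperbolic plane (0 1 / 1 0)
assert [[dot(a, b) for b in (v, wp)] for a in (v, wp)] == [[0, 1], [1, 0]]
```

The shear works because dot(wp, wp) = w·w − 2λ(v·w) (the λ² term dies since v is isotropic), so the chosen λ kills it exactly.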
So whenever you have an isotropic form, you can peel off a copy of the hyperbolic plane and you're left with something smaller. In fact you can keep doing this — keep peeling off copies of the hyperbolic plane — until you're left with something anisotropic, something with no nontrivial zeros. And again by Witt cancellation, that's determined up to isomorphism. So a corollary: any quadratic form is isomorphic to a direct sum of copies of the hyperbolic plane plus some other quadratic form (V′, q′), where (V′, q′) is anisotropic — and again by Witt cancellation it's determined up to isomorphism. So if you want to classify quadratic forms up to isomorphism over any field, it suffices to classify the anisotropic quadratic forms, because every quadratic form is a direct sum of copies of the hyperbolic plane plus something anisotropic. There's a question in the chat: what is an isotropic form? An isotropic form is a form that has a nonzero vector of zero length. Let me just write that. Recall: V is isotropic if there exists some nonzero vector v in V with v·v = 0, and anisotropic otherwise. So if your goal is to classify quadratic forms up to isomorphism, which is an important goal in the subject, it really suffices to work with anisotropic forms. Another observation is that hyperbolic forms look the same over every field, and any question involving them is very easy to answer, so it's really the anisotropic forms that are interesting to understand. Okay, so this is the normal form for quadratic forms that I was alluding to.
So anisotropic forms are the building blocks, the ones you want to understand up to isomorphism. Another observation — really a consequence of Witt's cancellation theorem and this type of machinery — is a general pattern in the theory of quadratic forms. As I mentioned, there are at least three natural questions you can ask about quadratic forms over a given field. The first: if I have two quadratic forms, when are they isomorphic — what can I do to distinguish them, or to see that they're isomorphic? The second: when does a quadratic form represent zero — that is, when is a quadratic form isotropic? The third: when does a quadratic form represent a given element a of the field — that is, when is there a vector v with v·v = a? It turns out that the most powerful of these is the question of when a quadratic form is isotropic. The really fundamental thing to ask for, over a given field, is a tool for determining when a quadratic form is isotropic — and, for example, that is the form in which the Hasse-Minkowski theorem is stated. By "fundamental question" I mean fundamental among the three questions I mentioned. What I mean by that is: suppose you have an oracle — you're working over some field, and you have an oracle that tells you when a given quadratic form is isotropic; you hand it a quadratic form and it tells you whether or not it's isotropic. Then you can classify quadratic forms over the field, at least inductively. So when I say the quote-unquote fundamental question, I mean that this is the strongest question. Okay, so why is this really the fundamental question?
Suppose you have some oracle or machine that tells you when quadratic forms are isotropic. Now suppose (V, q) and (W, q′) are quadratic forms and you want to know if they're isomorphic. How do you do this? First, you have your oracle, so you can ask: are V and W isotropic or anisotropic? If V and W are isotropic, write them in normal form: hyperbolic planes plus an anisotropic part. Now if V and W are isomorphic, then the anisotropic parts are isomorphic and the numbers of hyperbolic summands are equal. Conversely, V is isomorphic to W if and only if the anisotropic parts are isomorphic — since you know the dimensions, once the anisotropic parts are isomorphic the hyperbolic parts have the same dimension, and the forms have to be isomorphic. So you may as well assume that V and W are anisotropic if you want a machine for determining when they're isomorphic. Now here's a nice trick. If V is to be isomorphic to W, then any length squared in V — any scalar represented by V — has to be represented by W as well. So pick v in V and let a = v·v. By construction, V ⊕ ⟨−a⟩ is isotropic. Question for the oracle: is W ⊕ ⟨−a⟩ isotropic? If not, then V is not isomorphic to W. If yes, then you split hyperbolic planes off of V ⊕ ⟨−a⟩ and W ⊕ ⟨−a⟩ and ask whether the anisotropic complements are isomorphic. Then you recursively continue this process, and it terminates because at each stage you lower the dimension — the complements have dimension less than that of V. So this inductively determines when V and W are isomorphic.
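Over a small finite field, brute force can play the role of the oracle. The following Python sketch is mine, not from the lecture: `isotropic` is a brute-force isotropy oracle for diagonal forms mod p, and `represents` uses the equivalence from the trick above — for a non-degenerate form V and a ≠ 0, V represents a if and only if V ⊕ ⟨−a⟩ is isotropic.

```python
from itertools import product

def isotropic(diag, p):
    # brute-force oracle: does a_1 x_1^2 + ... + a_n x_n^2 = 0
    # have a nonzero solution mod p?
    return any(sum(a * x * x for a, x in zip(diag, xs)) % p == 0
               for xs in product(range(p), repeat=len(diag)) if any(xs))

def represents(diag, a, p):
    # for non-degenerate V and a != 0: V represents a iff V + <-a> is isotropic
    return isotropic(diag + [(-a) % p], p)

p = 7
assert isotropic([1, 1, 1], p)    # a ternary form over F_7 is isotropic
assert not isotropic([1, 4], p)   # x^2 - 3y^2 (note -3 = 4 mod 7); 3 is a non-square mod 7
assert represents([1, 1], 3, p)   # e.g. 1^2 + 3^2 = 10 = 3 mod 7
```

This is exactly the reduction in the recursive procedure: each call to the oracle either separates V from W or lets you pass to anisotropic complements of smaller dimension.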
I won't spend too much time on this, but the rough takeaway is that the most powerful question in this business is whether you can tell when a quadratic form is isotropic. And there are a bunch of results illustrating this: for example, it's often easier to prove that quadratic forms are isomorphic, or to have a tool for showing they're isomorphic, than to decide whether a quadratic form is isotropic. So often there are two different forms of various results, and the version about when you can solve the equation — when something is isotropic — is the strictly stronger statement, because of this kind of procedure. There's a question: what does it say between "inductively" and "when"? Right, sorry — it says you want to understand when V is isomorphic to W. Well, V is isomorphic to W only if V ⊕ ⟨−a⟩ is isotropic if and only if W ⊕ ⟨−a⟩ is isotropic; and then V ≅ W if and only if the anisotropic parts of V ⊕ ⟨−a⟩ and W ⊕ ⟨−a⟩ are isomorphic. So you have reduced your question of when two anisotropic forms are isomorphic to a question about anisotropic forms in smaller dimension, using this procedure. [What does it say after "if and only if"?] Okay — maybe I can put this on the problem set to think about more, but I really just want to work through this in an example and see how it helps us. The goal — I want to be really concrete here — is to classify quadratic forms over a finite field. So rather than trying to make this very precise, asking what we really mean by an algorithm and so forth, I just want to illustrate it by classifying quadratic forms over a field.
So, as an application of this idea, let's classify quadratic forms over a finite field F_q. We saw last time that you can classify quadratic forms over C, where forms of a given dimension are all the same, and over R, where you basically have the dimension and the signature. Now we want to do the next simplest case, a finite field. But I think it will be fun, and also relevant, to point out that these methods are more general than the finite field case: they apply in some other examples which maybe won't be used in the course but are quite interesting. So I'm actually going to formulate this a little more generally, with the following definition. Given a field F, the u-invariant of F is the largest dimension of an anisotropic form over F, or infinity if there is no largest dimension. For example, over the real numbers the u-invariant is infinity, because x_1² + ... + x_n² is still anisotropic no matter how large n is. Over many fields, though, once a quadratic form has enough variables it automatically has a solution. So that's the u-invariant, and it's some sort of measure of the complexity of the field, at least as far as quadratic forms are concerned. Because if your u-invariant is small, then the classification of quadratic forms is essentially reduced to the classification in small dimensions, since once the dimension gets large enough you can keep splitting off hyperbolic forms. So, roughly, the smaller the u-invariant, the easier it is to classify quadratic forms.
I put some things about the u-invariant on the exercises; it's actually connected to quite a bit of recent research in the theory of quadratic forms, and there are a lot of open questions involving it. We're not going to need most of that, but I want to use it as a guide: if the u-invariant is small, then it's easier to classify quadratic forms — roughly because if you can show that lots of things are isotropic, classification gets easier. In particular, I want to start with the following theorem: any quadratic form of dimension at least three over F_q is isotropic. So the u-invariant of F_q equals two — well, you also have to check that there is a binary anisotropic form, and I'll come back to that. But in particular, once you have at least three variables in a quadratic form over a finite field, there's always a nontrivial zero. This is a basic fact, and it's going to tell us that the classification of quadratic forms is very simple over a finite field. In fact, the classification is going to work over any field where the u-invariant is equal to two. Okay, so let me prove the theorem — that any quadratic form of dimension three over F_q is isotropic. We can always diagonalize a quadratic form, so we can assume the form is ⟨a, b, c⟩, where a, b, c are nonzero elements of F_q. What we want is a nonzero solution to the equation a x² + b y² + c z² = 0 — that is, a root of this quadratic form, an isotropic vector.
So we want a nontrivial solution of the equation a x² + b y² + c z² = 0. In fact, what we're going to do is just take z = 1, so that what we want to solve is a x² = −c − b y². And now we just make a counting argument. Observe that the left-hand side depends only on x and the right-hand side depends only on y: we have a function of x and a function of y, and we just want to show that their ranges overlap. So observe that the set of all elements of F_q of the form a x², as x ranges over F_q, has cardinality (q + 1)/2. Why is that? Well, first let x range over the units of F_q; squaring is two-to-one on the units, so a x², as x ranges over the units, takes (q − 1)/2 values. But then you also have zero, which adds one. Similarly, the set of all −c − b y², as y ranges over F_q, has size (q + 1)/2. So there has to be an overlap: you have two subsets of F_q which together have q + 1 > q elements, so they can't be disjoint. By the pigeonhole principle, you can solve the equation, and that gives you an isotropic vector in your three-dimensional quadratic space. So this is kind of a fun argument. [Question: that shows the u-invariant is at most two, but it doesn't explicitly show that there is an anisotropic form of dimension two, does it?]
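The two value-sets in this counting argument can be checked directly for a small prime. This Python sketch is my own; the prime p and the coefficients a, b, c are arbitrary nonzero choices, not from the lecture.

```python
p = 11
a, b, c = 2, 3, 5                                  # any nonzero coefficients mod p

left = {(a * x * x) % p for x in range(p)}         # values of a x^2
right = {(-c - b * y * y) % p for y in range(p)}   # values of -c - b y^2

assert len(left) == (p + 1) // 2    # (p-1)/2 nonzero square values, plus 0
assert len(right) == (p + 1) // 2
assert left & right                 # two 6-element subsets of an 11-element set must meet
```

Any element of the intersection gives a pair (x, y) with a x² + b y² + c · 1² = 0, i.e. an isotropic vector with z = 1.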
Yes — you're right, I should say a little bit more: we need to construct a binary form that is anisotropic. You can do that because not every element of F_q is a square. So let u be an element of F_q^× which is not a square. Then x² − u y² = 0 has no nontrivial solution — and remember, I'm always working in odd characteristic. So since not every element is a square, you can write down a binary quadratic form with no nontrivial zeros, and the u-invariant is at least two. And on the other side, if you have a quadratic form in, say, five variables, then it has a subform in three variables, and so you get an isotropic vector. Thanks for pointing that out. Okay, there's another question: why does the set of all a x², as x ranges over F_q, have size exactly (q + 1)/2? Well, let's count. First count the cases where x is not zero: if x ≠ 0, the squaring function is exactly two-to-one — the preimage of anything in the image is exactly two points, ±x. So the nonzero square values have cardinality (q − 1)/2. And then you have zero, and adding those gives (q + 1)/2. The same count applies to the other set. Also, please ask questions now as well, but I'll stick around after the lecture.
Okay, so there's actually something more general. This was a fun and direct argument, but it used diagonalization of quadratic forms, so it doesn't apply as stated to forms of higher degree. There's a wonderful result that something like this holds for higher-degree polynomial equations as well, and you'll explore it a little on the exercises. More generally, finite fields are what's called C_1, meaning: if you have a homogeneous polynomial p(x_1, ..., x_n) of degree d in n variables, and n > d — there are more variables than the degree — then there's a nontrivial zero, a zero of this polynomial which is not just the zero vector, over the finite field. Any field with this property is called C_1. In particular, a C_1 field has the property that any ternary quadratic form is isotropic. There are many other important examples of C_1 fields — there's a little on the problem set — such as the field of rational functions in one variable over the complex numbers. These are particularly simple fields for many of these sorts of cohomological and arithmetic questions about solutions to equations. Sorry, there's a question: why is it called C_1, and what's the 1? Well, there's a version of this notion called a C_r field — yes, it's answered in the chat: you require more variables, replacing the condition n > d by n > d^r. So these are weaker conditions, where you need even more variables to guarantee a root.
So for example F_q is C1, but something like the Laurent series field F_q((T)) is going to be C2, and there's a whole hierarchy of fields where it becomes harder and harder to solve equations. Okay, I just have a few minutes, so what I want to do is explain how you get the classification of quadratic forms over a finite field — or more generally over any field whose u-invariant is at most two, i.e., where every ternary form is isotropic. So let's classify quadratic forms over F_q, or really over any field of u-invariant at most two; for example, this will apply to any C1 field. If you have a quadratic form and you want to understand it up to isomorphism, the first invariant is the dimension, and that works over any field. But there's another invariant you can write down pretty easily, called the discriminant. So let's make the following definition, which works over any field F. Let (V, q) be a quadratic form. To define the discriminant, first choose a basis e_i of V; then the discriminant of (V, q) is the determinant det(e_i · e_j). So you choose a basis, and you get a symmetric matrix: as we saw last time, once you've chosen a basis, the inner product of a quadratic form is specified by an n-by-n symmetric nonsingular matrix, and you take its determinant — that's the discriminant. The caveat is that, as stated, this is not well defined: for example, if you scale all the basis vectors by five, then you scale the determinant. So what you should do is consider the discriminant as living in F^× modulo squares, F^×/(F^×)².
So it's going to be independent of the choice of basis if you make it live in F^×/(F^×)². That's basically because of how the symmetric matrix transforms: any quadratic form gives you a symmetric matrix, and when you change basis you multiply the symmetric matrix by some matrix on the left and by its transpose on the right. If you think about what that does to the determinant, it doesn't fix the determinant, but it multiplies it by a square. So the determinant itself is not independent of the choice of basis, but changing the basis always changes it by a square, and hence this is a well-defined coset in F^×/(F^×)². For example, if F = F_q, then F_q^×/(F_q^×)² is Z/2, so the discriminant of a quadratic form over a finite field is just a sign, plus or minus one. So let me state the proposition: over a field of u-invariant at most two — i.e., every ternary form is isotropic — quadratic forms are classified by the dimension, which is a positive integer, and the discriminant. Okay, so what's the proof? Basically: to classify quadratic forms, you may as well classify quadratic forms of dimension at most two, because a quadratic form of dimension greater than two splits off a bunch of hyperbolic planes. Doing that changes the dimension by subtracting an even integer and changes the discriminant by a sign. So without loss of generality you can work in dimension at most two.
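Here is a small numerical check of the well-definedness argument — my own illustration, with made-up helper names: a change of basis replaces the Gram matrix A by BᵀAB, so the determinant is multiplied by det(B)², and the square/non-square class mod p is unchanged.

```python
def det3(M, p):
    """Determinant of a 3x3 matrix mod p, by cofactor expansion."""
    a, b, c = M[0]; d, e, f = M[1]; g, h, i = M[2]
    return (a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)) % p

def congruent(A, B, p):
    """Return B^T A B mod p (the Gram matrix in the new basis)."""
    n = len(A)
    BT_A = [[sum(B[k][i]*A[k][j] for k in range(n)) % p for j in range(n)]
            for i in range(n)]
    return [[sum(BT_A[i][k]*B[k][j] for k in range(n)) % p for j in range(n)]
            for i in range(n)]

p = 7
A = [[1, 2, 0], [2, 3, 1], [0, 1, 5]]   # a symmetric Gram matrix mod 7
B = [[1, 1, 0], [0, 2, 1], [3, 0, 1]]   # an invertible change of basis
dA, dB = det3(A, p), det3(B, p)
assert dB != 0                           # B really is invertible mod 7
# det transforms by the square det(B)^2 ...
assert det3(congruent(A, B, p), p) == (dA * dB * dB) % p
# ... so the class in F_p^x / (F_p^x)^2 is basis independent
squares = {(x * x) % p for x in range(1, p)}
assert (dA in squares) == (det3(congruent(A, B, p), p) in squares)
print("discriminant class preserved mod squares")
```

The same identity det(BᵀAB) = det(A)·det(B)² holds over any field; mod 7 is just a convenient place to see it.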
So suppose V and W are quadratic spaces of dimension two — dimension one is easy, so let's say dimension two — with the same discriminant; what you need to show is that V is isomorphic to W. In fact, what you can show is: if d is the discriminant, then V is isomorphic to ⟨1, d⟩, and so is W. Why? The fact that you're working over a field of u-invariant at most two tells you that any two-dimensional quadratic form represents one. So in dimension two you can find a vector v in V with v · v = 1 — I'll leave that as an exercise: over a field of u-invariant at most two you can always find such a vector. Then let v′ be a vector in the orthogonal complement of v, and diagonalize. What you find is that this two-dimensional quadratic space is isomorphic to — well, you have a vector of length one, and an orthogonal vector whose length is something; but just by looking at the discriminant, that something has to be d times a square. Okay, this is going to be on the problem set anyway, so maybe I should stop here; you'll explore this more on the homework. I'll stick around for questions — any questions? "I have a question: I was wondering why we call it the hyperbolic plane — is there some geometric intuition behind that?" Well, if you write it in coordinates it's the quadratic form xy, and the level sets of xy are hyperbolas.
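The key step — a nondegenerate binary form over a field of u-invariant at most two represents 1 — can be seen by brute force over a small finite field. This is my own toy example, not the lecture's; the Gram matrix and helper name are made up:

```python
from itertools import product

def find_length_one(gram, p):
    """Find v in F_p^2 with v . v = 1 for the form given by gram, mod p."""
    for v in product(range(p), repeat=2):
        q = (gram[0][0] * v[0] * v[0]
             + 2 * gram[0][1] * v[0] * v[1]
             + gram[1][1] * v[1] * v[1]) % p
        if q == 1:
            return v
    return None

p = 7
gram = [[3, 1], [1, 4]]                      # a binary form 3x^2 + 2xy + 4y^2
disc = (gram[0][0] * gram[1][1] - gram[0][1] ** 2) % p
assert disc != 0                             # nondegenerate
v = find_length_one(gram, p)
assert v is not None                         # the form represents 1
print("length-one vector:", v, "disc =", disc)
```

Once such a v is found, taking v′ orthogonal to it diagonalizes the form as ⟨1, ?⟩, and comparing discriminants forces ? to be disc times a square — exactly the argument sketched above.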
Right, sorry, there's a question: which pairs of discriminant and dimension arise from a quadratic form? That's a good question — in fact you can get any discriminant in any dimension, because (sorry, maybe I should share my screen again) you can always take ⟨1, 1, ..., 1, d⟩: that has any dimension you want, and the discriminant is also whatever you want. Sorry, there's another question: why can't we rescale the basis vectors by a square? Right — I'll say a little more about this next time, but if you have a quadratic form in diagonal form, then the discriminant is the product of the diagonal terms. You can change the diagonal terms by squares, as you say, but the discriminant is only defined in F^×/(F^×)² anyway, so that doesn't change it. I hope that helps. I'll say more tomorrow — I'll try to go into more detail about where isomorphisms between quadratic forms come from. Any quadratic form can be written down by its dimension and an n-tuple of elements of your field, and you can ask when two such n-tuples give isomorphic diagonal forms. In fact, you can answer that very precisely, if somewhat inexplicitly.
This is the chain equivalence theorem of Witt, which I'll explain next time; it turns out to be really useful for defining invariants, because to check that something is well defined you only have to check that it doesn't change under these very particular moves you're allowed to do. I think there was a hand raised. "So basically, if we look at the proposition we just proved for every field of u-invariant at most two: in the case of finite fields, where F_q^×/(F_q^×)² is just Z/2Z, that means in every dimension we have, up to isomorphism, exactly two quadratic forms — one for each square class?" Yes — sorry, I should have said that; maybe I'll pick up on this a bit more next time. So it's a little simpler than over the real numbers: over the reals you have a bunch of pluses and a bunch of minuses, and those can vary as long as the total is n, but here you just have one sign, plus or minus one. You can also define the discriminant over the real numbers, but there it's not a strong enough invariant, while over a finite field it's everything. "Is Q_p a C1 field, for some prime p?" Okay, great question. It's not going to be a C1 field. We haven't defined Q_p yet — that's probably Thursday's lecture — but Q_p is going to be something like a Laurent series field; it's supposed to be analogous to a Laurent series field over F_p. On the exercises there will be some things about Laurent series fields, as a warm-up before we talk about Q_p. And the answer is — well, this was actually an open question.
It's not going to be C1 because you have an extra parameter: if you think about Laurent series, there's an extra parameter coming from the order of vanishing. It's known that Laurent series fields over a finite field are C2 — the weaker condition that a degree-d polynomial in more than d² variables has a nontrivial zero. And it was an open question for a while whether that holds for Q_p. It is true for quadratic forms: the u-invariant of Q_p is going to be four. That's really one of the themes here — what we talked about today is essentially the case of u-invariant two, where you have one invariant given by the discriminant, and later in the course we'll essentially treat the case of u-invariant four, which is, for example, the case of Q_p. So quadratic forms in five variables over Q_p have a nontrivial zero. But in general it's not true that Q_p is C2 — I think that turned out to be false. What is true is a famous result called the Ax–Kochen theorem: if you fix a degree d — you have to be careful with the order of quantifiers here — then for p sufficiently large relative to that degree, any form of degree d in more than d² variables over Q_p has a nontrivial zero. So there is something, but Q_p is not a C2 field, even though you might want it to be — C2 would give u-invariant four, and what is true is just that the u-invariant is four. "Oh, I see how the C_i naming works. Can you relate the u-invariant to the C_i numbers in general?" In general you can't, because the C_i condition is about forms of any degree.
Sorry — the C_i condition is for forms of any degree, whereas the u-invariant is only about quadratic forms, and quadratic forms are special in many ways, for example they diagonalize. If you have a field that is C_n — I didn't define C_n, but it's the evident generalization — then the u-invariant is bounded by 2^n. And I don't know whether these bounds are best possible in general; I'm not sure what the state of the art is, but if someone here knows, feel free to step in. So Q_p is not a C2 field, but it behaves that way for quadratic forms: quadratic forms over Q_p can still be controlled pretty well, and you can classify them. Over a finite field you have the dimension and this one invariant, the discriminant; over Q_p you have an additional invariant, the Hasse invariant, which we'll see pretty soon. More questions? — Oh, sorry, a question. I can't hear you — okay, I can hear you now. "Just to clarify the terminology: when we say quadratic form, do we mean a specific decomposition, or do we start with a quadratic space and then there is a decomposition?" When I say quadratic space I usually mean just a vector space with a quadratic form — I'm not choosing a basis. If I write out a quadratic form in coordinates, for example ax² + by² + cz², I'm doing that somewhat informally, implicitly referring to the quadratic form given by that particular basis of the vector space.
"So when we say quadratic form, does that already mean there is a fixed basis? I want to understand the difference between quadratic space and quadratic form — when we use these two terms, what's the difference?" Well, I'm not necessarily going to make a very precise distinction between them; everything I say is really going to be about quadratic spaces. Maybe some people do use the terms the way you said, with quadratic form referring to a specific function. "I'm not asking about other people, just about your lectures, because the term quadratic form also shows up pretty often — when you say quadratic form versus quadratic space, how do we distinguish what you mean?" I'm only really going to work with quadratic spaces; if I say quadratic form, I really mean quadratic space. "Okay." I don't know if Dustin wants to add anything — I guess you could say that the quadratic form is the function on the vector space with values in the scalars; it's part of the data of the quadratic space, the second part of the data, the first part being the vector space. That terminology would make sense to me. "Can I think of it this way: a quadratic space comes with a quadratic function, and if I fix the basis I can express precisely what that function is, and then there is also a decomposition?" If you choose a basis, you get to write down a degree-two polynomial. "And if it's diagonal — just x², y², z² and so on — then it corresponds to a certain decomposition; but it could also have cross terms, like xy."
"In yesterday's lecture we saw that over R it's always ⟨1⟩^r ⊕ ⟨−1⟩^s — it's always in that form. But over a different field, can I have a non-diagonal decomposition?" Well, when I said yesterday that over the reals you can write a quadratic form as ⟨1⟩^r ⊕ ⟨−1⟩^s, that's saying you can choose a basis such that the quadratic form looks like that. If you start with a quadratic form and choose an arbitrary random basis, then you'll probably have lots of cross terms — if you choose a random basis of a quadratic space over the real numbers, there's no reason the vectors have to be orthogonal to each other. The statement is that, over any field, you can choose a basis with no cross terms, so that the basis vectors are all orthogonal to each other; over the real numbers you can even choose them so that the self-products e_i · e_i are each plus or minus one. If you want to think about it in terms of symmetric matrices: a symmetric matrix gives you a function on F^n, and when you start out it could have lots of cross terms. What we're studying is symmetric matrices up to a suitable equivalence: a symmetric matrix A is equivalent to Bᵀ A B, and that equivalence comes from a change of basis in the quadratic space.
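The "kill the cross terms by changing basis" step is just completing the square, written in matrix form. Here is a minimal sketch — my own two-dimensional example over the rationals, using exact arithmetic — showing that a suitable B makes BᵀAB diagonal:

```python
from fractions import Fraction as Fr

# Gram matrix of the form x^2 + 4xy - 3y^2 (cross coefficient 4 = 2 * A[0][1])
A = [[Fr(1), Fr(2)], [Fr(2), Fr(-3)]]
# complete the square: x^2 + 4xy - 3y^2 = (x + 2y)^2 - 7y^2,
# i.e. substitute x = x' - 2y', y = y', giving the change-of-basis matrix:
B = [[Fr(1), Fr(-2)], [Fr(0), Fr(1)]]

def mat_mul(X, Y):
    """2x2 matrix product."""
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

BT = [[B[j][i] for j in range(2)] for i in range(2)]  # transpose of B
D = mat_mul(mat_mul(BT, A), B)
assert D == [[Fr(1), Fr(0)], [Fr(0), Fr(-7)]]         # diagonal form <1, -7>
print("diagonalized:", D)
```

The same procedure works over any field of characteristic not two: each completion of the square is one congruence A ↦ BᵀAB, and iterating it removes all cross terms.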
And then the statement is that, up to this equivalence, you can put your symmetric matrix in this particular diagonal form. "Okay, all right, thanks." So you could phrase everything in terms of symmetric matrices, but we're doing it geometrically. "Okay, thanks. I have another question now that I was thinking about it. We want to define a quadratic form mapping into some field, and I've seen two definitions. The one closer to what we're working with in this course is: start with a symmetric bilinear form, and take the associated quadratic function Q(x) = x · x. But there's another definition I've seen, which is more polynomial-flavored: a quadratic form is just a homogeneous polynomial of degree two. I was looking at an argument in a textbook reconciling these two definitions, and that argument — as so often in this subject — used the fact that the field doesn't have characteristic two, because we divided by two at some point. Is there a way to reconcile them in characteristic two? How are things even defined there, given that pretty basic constructions involve dividing by two — say if you want to work over an algebraic closure of F_2?" Well, there are multiple objects you can study in characteristic two. For example, you could study symmetric bilinear forms which are nondegenerate. You could also study quadratic forms in the sense of a function Q — as you said, a homogeneous degree-two polynomial; that gives rise to a symmetric bilinear form, and you can impose that this symmetric bilinear form is nondegenerate.
And those are somewhat different theories. I think one can develop many of these results in characteristic two, but I'd really have to defer to someone else on the state of that. "But even in the statement that a quadratic form determines a symmetric bilinear form and vice versa — in the polarization identity — we divide by two." Well, what you can do is start with a quadratic function and do the polarization without dividing by two: B(x, y) = Q(x + y) − Q(x) − Q(y). That gives you a symmetric bilinear form, and you can ask that it be nondegenerate. "Oh, I see." The symmetric bilinear form in this case isn't going to determine the quadratic function, so the quadratic function is strictly stronger data. Right — the quadratic function you consider in characteristic two is roughly one half of the construction you would do away from characteristic two. This is a perfectly good notion — people call such a Q a quadratic refinement, something stronger than the symmetric bilinear form. "I see, thank you." But for more about this I really do have to defer to someone with more characteristic-two expertise. In some of my lectures we're going to do quadratic forms over the integers, and there you need to be similarly careful — I suppose I will be when the time comes; I can't say right now exactly how it should go, but hopefully I'll have it straight by the time I give those lectures. "Thank you."
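The undivided polarization can be seen on the smallest possible example. This is my own illustration, not from the lecture: take Q(x, y) = xy on F_2². The form B(u, v) = Q(u + v) − Q(u) − Q(v) is bilinear, nondegenerate, and alternating (B(u, u) = 0 for all u), yet Q itself is not identically zero — so in characteristic two Q really is strictly stronger data than B:

```python
from itertools import product

def Q(v):
    """The quadratic function Q(x, y) = x*y on F_2^2."""
    return (v[0] * v[1]) % 2

def B(u, v):
    """Polarization without dividing by two: Q(u+v) - Q(u) - Q(v), mod 2."""
    w = ((u[0] + v[0]) % 2, (u[1] + v[1]) % 2)
    return (Q(w) - Q(u) - Q(v)) % 2

vecs = list(product(range(2), repeat=2))
assert all(B(u, u) == 0 for u in vecs)     # B is alternating: every vector isotropic
assert any(Q(v) != 0 for v in vecs)        # but Q is nonzero, so B(v, v) misses Q
# nondegenerate: every nonzero u pairs to 1 with some v
assert all(any(B(u, v) == 1 for v in vecs) for u in vecs if u != (0, 0))
print("char-2 quadratic refinement verified on F_2^2")
```

Working it out by hand, B(u, v) = u_1 v_2 + u_2 v_1 mod 2 — exactly the bilinear form with x · x always zero that comes up in the next exchange.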
Well, for example, you can have a nondegenerate symmetric bilinear form such that x · x is always zero. "You mean a quadratic form?" A symmetric bilinear form — in characteristic two you can always have that. "And you're saying it can have a nonzero refinement to a quadratic form?" Yes. "Well, in particular — but I'm saying you can already have a symmetric bilinear form such that every element is isotropic, even a nondegenerate one." Yes, a nondegenerate symmetric bilinear form. And if you're working over the integers, then having a quadratic refinement is a congruence condition — that x · x is always even. But over a field of characteristic two you're making some choice. "A choice of how to divide by two." Yes, exactly. Any more questions? Okay, so I'll try to drop by the office hours, and otherwise I'll keep an eye on the Discord. See you tomorrow.