 to the last week here. So let me share my screen, and you'll see that the title of today's lecture is quadratic forms over the integers. We're starting a new topic today, but first we have to finish our discussion from last time: we'll finish the proof of Hasse–Minkowski, and then we'll wipe the slate clean and start afresh with a new theme, quadratic forms over Z. Let me recall what we were in the middle of doing. The statement of Hasse–Minkowski is that if F is a quadratic form over Q such that F_ν is isotropic for all places ν of Q, then F is isotropic. So if you've got a non-trivial zero in the p-adics for all primes p and in the real numbers, then you've got a non-trivial zero in the rational numbers. We saw that this is true for n, the number of variables, at most three. Today we're going to do n = 4, and then we'll finally have enough machinery to run an induction establishing the case n ≥ 5. Okay. Before we get going on this, I want to remind you of a small lemma from last time that we didn't actually use then, because it was only meant for the n = 4 case. Fix any place ν. If a and b are in Q_ν^×, and c is in Q_ν^× but playing a different role, then the form ax² + by² represents c if and only if a certain equality of Hilbert symbols holds: if and only if (a, b)_ν = (c, −ab)_ν. The precise form of it doesn't matter so much; the important thing is just that it's some relation between the Hilbert symbol evaluated on c and the one evaluated on a and b. We proved that last time, so I won't go over it again. And now let me explain the case n = 4. So: the proof of Hasse–Minkowski for n = 4.
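For reference, here is my transcription of the two statements just recalled, written out in the notation used on the board (this is just a restatement, not anything beyond what was said):

```latex
% Hasse-Minkowski, as stated above: for F a quadratic form over Q,
\[
  F_\nu \ \text{isotropic for every place } \nu \text{ of } \mathbb{Q}
  \ \Longrightarrow\ F \ \text{isotropic over } \mathbb{Q}.
\]
% The representation lemma from last time, for a, b, c in Q_nu^x:
\[
  a x^2 + b y^2 \ \text{represents } c \ \text{over } \mathbb{Q}_\nu
  \quad\Longleftrightarrow\quad
  (c, -ab)_\nu = (a, b)_\nu .
\]
```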
So we're gonna, as usual, write our quadratic form in diagonal form: F = ax² + by² + cz² + dw². Now, in the previous lecture we said we can make a number of reductions: we can normalize a, and we can make b, c, and d square-free integers, but none of that is going to be important for this argument, so you could just take all of these to be non-zero rational numbers. The further assumptions on them won't be relevant, so I won't use them. Okay. Now, what is the idea? The idea is to split the form into two halves. Actually, I'd rather change signs for the purposes of the argument, so let me change what c and d are and write F as a difference of two binary forms: F = ax² + by² − cz² − dw². That's just going to make the notation a little simpler. So for all ν, F_ν represents zero: we can solve ax² + by² − cz² − dw² = 0 in Q_ν, which means we can solve ax² + by² = cz² + dw². And that tells us we can find a scalar t_ν in Q_ν such that ax² + by² represents t_ν, and cz² + dw² represents t_ν as well. So what the hypothesis, that our four-variable quadratic form represents zero, gives us is this: when we split the form into two, there is a number which is represented by both the first half and the second half, so to speak. Now, let me make a small remark: we can assume t_ν is non-zero, so t_ν ∈ Q_ν^×. Why is that? Well, suppose it were zero. Then the first half would represent zero, so it would be isotropic, and therefore it would represent everything. There was a lemma we've had and used several times: an isotropic form contains a hyperbolic plane, and the hyperbolic plane represents everything.
And similarly, the second half would represent everything. So then we could just choose any random non-zero scalar and it would be represented by both halves, and we could replace t_ν by that scalar. So there would be no problem; we can indeed assume it's non-zero. Now, a question came up: "F_ν, that's just the original F, right? But why are you now saying that it's ax² + by² − cz² − dw² that represents zero?" Well, there's a bit of an abuse here. When I say F_ν, I mean F, the same quadratic polynomial, but viewed now as a function on Q_ν instead of just as a function on Q. So it's the same F, you could say, but the statement "F_ν represents zero" means something different: it means there are elements x, y, z, w in Q_ν at which the value is zero. "Right, but you're omitting some of the summands of F_ν when you say that ax² + by² represents t_ν." Well, F_ν representing zero is the same thing as saying ax² + by² = cz² + dw² has a non-trivial solution. "But that equation isn't F_ν." No, that's not F. F is the polynomial with four terms, this one right here; F_ν also has four terms, and I'm then cutting it in half. I haven't given names to the two halves. "Oh, are you calling them t?" No, no, t_ν is a scalar: we know it exists just because we can take the value of ax² + by² at our zero of F_ν and call that t_ν. "Okay, sorry, I thought that t_ν was F_ν." Ah, yes, I see; the way I write them on the paper, one looks like a reflection of the other, but they are different. "Could you make it clear for somebody who walks in later?" Ah, I see what you mean. Looking at it again, yes, it does not look clear at all.
Okay, you know what? Let me make our form a capital F instead. I hope I remember; if I don't, please remind me. So our form is capital F. Thanks for pointing that out; I didn't realize it looked so confusing. Right. So we can assume t_ν is in Q_ν^×. And now, by the lemma we just recalled, the computation with Hilbert symbols, the two representation statements are equivalent to equalities of Hilbert symbols: (t_ν, −ab)_ν = (a, b)_ν, and similarly (t_ν, −cd)_ν = (c, d)_ν. Okay. Now we're going to use weak approximation. We want to replace the t_ν by a single t in Q^×. All these t_ν depend on ν a priori, right? We want to show we can replace them by something which doesn't depend on ν and in fact comes from Q, and we'll do that using weak approximation in a certain clever way. The statements from your problem sets are exactly a way of going from a bunch of local data to some global data, from local scalars to a global scalar. But to apply weak approximation, you have to have a finite set of places at which you're operating, and you have to be able to control everything that's happening outside of that finite set. So what is our finite set of places going to be? It's going to be the places at which the behavior of the quadratic form is hardest to control. Let S be the set of places consisting of infinity and 2; those are always tricky, the exceptional cases in the theory of quadratic forms, the real numbers and Q_2, so we'll always stick those in. And then we union the set of primes p for which p divides a, b, c, or d. So we look at all the primes that could potentially cause us trouble, and we make that our finite set.
All right, yes, a question? "Can we say that for all the other primes, the ones not included in S, this four-dimensional form we started with is isotropic over Q_ν?" Yes. We're going to use that observation, that kind of trick, more in the n ≥ 5 case; it's not actually going to help us with this precise argument, where we've transferred everything to representations by binary quadratic forms. But we'll use a form of it pretty soon. Okay, so now let's apply weak approximation from your problem set. That tells you there exists a t in Q as close as you want to t_ν for all ν in S. Meaning: there's a rational number t, and you can choose it so that the distance between t and t_ν in the ν-adic absolute value is as small as you like. Equivalently, the distance between t/t_ν and 1 is as small as you like; you fix how small you want it to be, and then you can make it that small. And this means you can guarantee that t/t_ν is a square, because the first problem on the problem set is that anything close enough to 1 in one of these local fields is a square. That's all we're going to take out of this: a t in Q^× such that at all the exceptional places, t differs from t_ν multiplicatively by a square. And what does this tell us? Recall the Hilbert symbol is bimultiplicative and takes values in ±1, so if you replace an entry of the Hilbert symbol by something differing from it by a square, the value doesn't change. That means (t, −ab)_ν = (t_ν, −ab)_ν, which by hypothesis equals (a, b)_ν. And that tells us that t is represented by ax² + by², and similarly also by cz² + dw².
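The problem-set fact being used, that a p-adic number close enough to 1 is a square, can be made concrete by Hensel lifting. Here is a small illustrative sketch (not from the lecture): for odd p, any t ≡ 1 (mod p) is a square in Z_p, and a Newton iteration computes its square root to any p-adic precision. The function name and the restriction to t ≡ 1 (mod p) are my choices for the demo.

```python
def padic_sqrt_of_unit(t, p, k):
    """For odd p and an integer t with t ≡ 1 (mod p), Hensel-lift the
    square root x = 1 of t mod p to a square root of t mod p**k."""
    assert p % 2 == 1 and t % p == 1
    x, prec = 1, 1  # x*x ≡ t (mod p**prec)
    while prec < k:
        prec = min(2 * prec, k)
        mod = p ** prec
        # Newton step x ← (x + t/x)/2, which doubles the p-adic precision
        x = (x + t * pow(x, -1, mod)) * pow(2, -1, mod) % mod
    return x % p ** k
```

For instance, 8 ≡ 1 (mod 7), and the function produces an integer whose square is 8 modulo 7⁴, i.e. a 7-adic square root of 8 to four digits of precision.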
Okay, and, sorry, I should be really careful here, because t is a rational number: I mean represented in Q_ν. It's represented in Q_ν by ax² + by², meaning you can find x and y in Q_ν such that t = ax² + by². So we've guaranteed that our number t is represented by ax² + by² in Q_ν for the exceptional places ν in S, but what about all the other ones? We have to be able to control those as well, and for this we actually need an extra hypothesis on t. So recall a variant of weak approximation, also on the problem set: we were allowed to additionally guarantee that the p-adic absolute value of t is at most 1 for all p not in S, with possibly one exception, which we'll call p₀. Actually, sorry, I may have stated it imprecisely in the problem set; you can prove something stronger, and what I want to guarantee is that t is a p-adic unit, with p-adic absolute value equal to 1, at every p outside S other than p₀. At p₀ you may have to allow some denominators. Okay. Then for p not in S ∪ {p₀}, we have (t, −ab)_p = 1, because we're at an odd prime (remember, we put 2 among our exceptional primes) and everything in the symbol is relatively prime to p. But (a, b)_p = 1 as well, since a and b are also p-adic units, so we have the same equality of symbols anyway. So t is also represented by ax² + by² in these Q_p's.
So the only place left at which we don't know that t is represented by our two halves is p₀. But that's only one place, and by Hilbert's product formula we must also have (t, −ab)_{p₀} = (a, b)_{p₀}. Hilbert's product formula says that if you know the Hilbert symbols at all but one place, then you know them at the remaining place: the remaining value is determined, because the product over all places has to equal 1. And that's true for the left side of our equality and also for the right side. So here's where we use Hilbert's product formula, that is, quadratic reciprocity, in a kind of surprising way: we see that also at the place p₀, we have the same conclusion. So our grand conclusion is that t is represented by ax² + by² and by cz² + dw² over Q_ν for all places ν. But being represented by a binary quadratic form is the same thing as the associated ternary form being isotropic: ax² + by² − tu² is isotropic over Q_ν for all ν, and so is cz² + dw² − tu². And then, bootstrapping a little, by the three-variable case of Hasse–Minkowski... sorry, I'm fitting that into a very small corner there; let me move on to the next page. By the three-variable case, ax² + by² − tu² and cz² + dw² − tu² are isotropic over Q, and that in turn tells us that t is represented over Q by both halves: there are rational x, y, z, w with ax² + by² = t and cz² + dw² = t. So those two expressions are equal to each other; you bring one over to the other side, and you've solved your equation. That's the four-variable case. Okay, so now let's move on to the case of five or more variables.
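Since the argument leans on computing Hilbert symbols at the "easy" odd primes, a small sketch may help. This is my own illustration, not from the lecture: it implements the standard formula for (a, b)_p at an odd prime p via Legendre symbols (write a = p^α·u, b = p^β·v with u, v units; then (a, b)_p = (−1)^{αβ(p−1)/2}·(u/p)^β·(v/p)^α). The function names are my choices.

```python
def legendre(a, p):
    """Legendre symbol (a/p) for an odd prime p, by Euler's criterion."""
    a %= p
    if a == 0:
        return 0
    return 1 if pow(a, (p - 1) // 2, p) == 1 else -1

def hilbert_odd(a, b, p):
    """Hilbert symbol (a, b)_p for an odd prime p and nonzero integers a, b."""
    alpha, u = 0, a
    while u % p == 0:
        alpha, u = alpha + 1, u // p
    beta, v = 0, b
    while v % p == 0:
        beta, v = beta + 1, v // p
    sign = (-1) ** (alpha * beta * (p - 1) // 2)
    return sign * legendre(u, p) ** beta * legendre(v, p) ** alpha
```

As a sanity check against the representation lemma: x² + y² has (a, b) = (1, 1), and (5, −1)_5 = 1 = (1, 1)_5, matching the fact that 5 = 1² + 2² is represented, while (3, −1)_3 = −1 ≠ (1, 1)_3, matching the fact that 3 is not a sum of two squares in Q_3. When both a and b are p-adic units at an odd p, the symbol is automatically 1, which is exactly the observation used above for p outside S ∪ {p₀}.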
So I'll just write it in five variables for concreteness, but you'll see that the argument is inductive and will work for any n ≥ 5. So let's say n = 5. We'll start in a similar way, with the same sign trick; let me call the coefficients a, b and then a₁, b₁, c₁. So F = ax² + by² − a₁z² − b₁w² − c₁v². And we're going to do a slightly different argument here. It won't be obvious yet why I'm choosing my finite set the way I do, but it'll come through in the argument. We're going to do another weak approximation, but it will suffice to use the naive form of weak approximation in this case. Everything is supposed to become easier when you add more variables, right? That's the principle of the u-invariant that Achille talked about: over most fields, if you add enough variables, you're supposed to always have a zero. And in particular, I just want to make a remark here: we saw that over a p-adic field, as soon as you have five variables, you're guaranteed a non-trivial zero. So our hypothesis in Hasse–Minkowski, that the form is isotropic at all places, is almost vacuous. I say almost because at the real numbers it's still saying something: that you're isotropic over the real numbers. And that should morally be all we need to use. Now, in fact, in the argument we're going to use more than that, but this is just to show you that the argument is supposed to be a little easier than the n = 4 case. It's supposed to be easier to find zeros the more variables you have, and that makes sense. Okay, so we're going to let S be the set of places at which a₁z² + b₁w² + c₁v² is not isotropic.
And this is finite, because for large enough primes, namely for primes p not dividing 2a₁b₁c₁, you have a non-trivial zero in F_p. That was the counting argument Achille gave: a quadratic form in three variables over F_p always has a non-trivial zero. And then you get one even in Z_p by Hensel's lemma, and therefore also in Q_p. And this, by the way, is where we're using the hypothesis that n ≥ 5: when you split off two variables, you have at least three variables left over, so that exactly this argument shows the set of places we're looking at is finite. Okay. So now, to make a weak approximation argument, we need a scalar. Just a second, let me get my notes up; I'll get a little lost in my argument otherwise. Right: we're again going to choose a value t_ν of ax² + by². So let's say ax_ν² + by_ν² = t_ν for ν in S, where x_ν, y_ν, and t_ν are all in Q_ν. And again, we can assume t_ν is in Q_ν^×; in fact, we can assume x_ν and y_ν are in Q_ν^× as well. I won't give the little argument for why. I got briefly confused here about whether the naive form of weak approximation suffices; sorry, let's just put the assumption in. So now what we're going to do is apply weak approximation not to t_ν directly, but to the x_ν's and y_ν's, which is actually stronger: if we're close to x_ν and y_ν, then we're also close to t_ν, just because of the algebraic expression and continuity of the algebraic operations.
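The counting argument just cited guarantees a non-trivial zero of a ternary form mod p whenever p does not divide 2a₁b₁c₁. For small primes one can simply exhibit such zeros by brute force; this is my own sketch, not part of the lecture, and the function name is a made-up label.

```python
def ternary_zero_mod_p(a, b, c, p):
    """Brute-force a non-trivial zero of a*x^2 + b*y^2 + c*z^2 mod p.
    The counting argument guarantees one exists whenever p does not
    divide 2*a*b*c; this search just exhibits it for small p."""
    for x in range(p):
        for y in range(p):
            for z in range(p):
                if (x, y, z) != (0, 0, 0) and (a*x*x + b*y*y + c*z*z) % p == 0:
                    return (x, y, z)
    return None  # cannot happen for p not dividing 2abc
```

For example, x² + y² + z² has the zero (1, 1, 1) mod 3, and the search finds non-trivial zeros of it modulo every odd prime, in line with the finiteness of S.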
So we'll choose x, y in Q^× close enough to x_ν, y_ν so that x equals x_ν up to a square in Q_ν, and the same for y. And this implies, as I said, by continuity, that if we choose them close enough, then t = ax² + by² is close enough to t_ν, and in particular differs from t_ν by a square. Okay. So we already have that, in Q_ν, t is represented using rational numbers x and y. Now we just have to control the other half of the expression: we also want t to be represented by the variables we left out. But, yeah, wait, I didn't make this precise enough, I'm sorry. I don't want to choose a random value of ax² + by². I want to choose it just like in the previous case, so that it also equals a₁z_ν² + b₁w_ν² + c₁v_ν²: I want to choose a zero of my quadratic form over Q_ν and let t_ν be the common value of the two halves (remember I put in a minus sign). So we're doing it just like before; I'm sorry for getting confused about this. Then the fact that t differs from t_ν by a square implies that t, like t_ν, is represented by the other half of my quadratic form. In other words, −tu² + a₁z² + b₁w² + c₁v² is isotropic for all ν in S. But it's also isotropic, and here's where our choice of S comes in, for all ν not in S: even if you throw out the t part, the rest is still isotropic; that was the definition of S. So for ν not in S, even a₁z² + b₁w² + c₁v² is isotropic, you have a non-trivial zero there, and you can just set the first variable u equal to zero and find that the whole form, with the −t, is isotropic too. Okay.
And now by induction, or rather by the n = 4 case here, we get that −tu² + a₁z² + b₁w² + c₁v² is isotropic over Q. That tells you that t is represented by a₁z² + b₁w² + c₁v² over Q. And since we already argued with weak approximation that t is represented by the first half of the quadratic form, we get, with our x's and y's, that ax² + by² can be made equal to a₁z² + b₁w² + c₁v². And that gives us our global solution, our solution in the rational numbers. "Sorry, what's the last part of the previous page, before the form in a₁, b₁, c₁?" Because a₁z² + b₁w² + c₁v² is isotropic: since that's isotropic, there's a non-trivial zero of it, and we've added a variable to it, so you can just set that added variable equal to zero and you get a non-trivial zero of the bigger form. "Is that over Q or over Q_ν?" Yes, over Q_ν. I didn't write that, thank you; it'll be too cramped if I squeeze it in. Okay, all right. So that was the Hasse–Minkowski theorem. And now let's again wipe the slate clean and start, I don't want to say start the course over, but start a new topic: integral quadratic forms, or quadratic forms over Z. Let me start by recalling the definition of a quadratic space over a field, let's call the field K. A quadratic space was a pair (V, F), where V is a finite-dimensional vector space and F is a function from V to K such that, first, F is homogeneous of degree two: F(λv) = λ²F(v), you can pull out a λ². And second, if you define the pairing of v and w to be F(v + w) − F(v) − F(w), this should be linear in each variable.
That's kind of abstract, but it's equivalent to saying that if you choose a basis and write F in terms of the coordinates, call them x₁, ..., x_n (oops, should I call it capital F? maybe it doesn't matter anymore), then F is a homogeneous polynomial of degree two. So this quadratic space thing is just a coordinate-free way of talking about homogeneous polynomials of degree two. We'll use both perspectives, but for concreteness I recommend you just think of a homogeneous polynomial of degree two. So for example, in two variables you just get something like ax² + bxy + cy². That's what two-dimensional quadratic spaces are: once you choose coordinates, you're just talking about such a homogeneous quadratic polynomial. But the fact that we started with an abstract description tells us that we're going to be mostly interested in studying quadratic forms up to change of variables. So we consider this quadratic form to be equivalent to, or isomorphic to, any other quadratic form we get by an invertible linear change of variables in x and y. This is the context we've been working with since the very start of Achille's lectures. Now, what about over Z? Well, we can actually just use the same definition. The only real question is what the analog of a finite-dimensional vector space is. And the correct analog of a finite-dimensional vector space over Z is just a finite free Z-module. So it's a pair (M, F), where M is a Z-module, an abelian group, which has a basis, i.e. which is isomorphic to Zⁿ, a direct sum of n copies of Z, for some n. So we just explicitly require that our abelian group be a free abelian group of finite rank.
And then we think of that as analogous to a finite-dimensional vector space, or the closest we can possibly come to one. And F is as before. And again, there was nothing in the equivalence between these two perspectives which really used that we were over a field, so it's the same thing as a homogeneous degree-two polynomial with integer coefficients. Before, the coefficients a, b, c were in the field; now they lie in the integers. So far it doesn't seem that different. But I'm going to make a series of remarks which will lead to a picture that there are actually quite a few differences between the theory over Z and the theory over a field. The first remark is that Achille had a standing convention that a quadratic space is always non-degenerate. So before, we said our quadratic spaces, or quadratic forms, should be non-degenerate: the associated pairing V × V → K (or M × M → Z, or whatever) should be a perfect pairing, i.e. it should induce an isomorphism of M with its dual (sorry, maybe I'm switching my M's and V's). We're going to drop this assumption now. Why? Well, our favorite examples are degenerate; the condition is not satisfied in our favorite examples, and there are just too few non-degenerate quadratic forms over Z for our purposes. For example, one of the most famous and well-studied quadratic forms is also one of the simplest to write down: f(x, y) = x² + y². And there are some cool theorems in number theory about this, and these are the kinds of theorems we're going to be talking about: an odd prime p is represented by x² + y² if and only if p is congruent to 1 mod 4. That's a very, very fun fact in number theory. These are the kinds of theorems we're going to be discussing right now: questions about representing numbers, and in particular prime numbers, by quadratic forms.
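The two-squares theorem just stated is easy to check numerically. Here's a quick brute-force sketch of my own (not from the lecture) confirming that, among odd primes below 100, exactly those congruent to 1 mod 4 are sums of two squares.

```python
from math import isqrt

def is_sum_of_two_squares(n):
    """Check whether n = x^2 + y^2 has a solution in integers, by searching x."""
    for x in range(isqrt(n) + 1):
        y = isqrt(n - x * x)
        if x * x + y * y == n:
            return True
    return False

def odd_primes_below(bound):
    """Trial-division list of odd primes below the bound."""
    return [p for p in range(3, bound, 2)
            if all(p % d for d in range(2, isqrt(p) + 1))]
```

For instance, 5 = 1² + 2² and 13 = 2² + 3² pass, while 3 and 7 fail, matching the congruence condition mod 4.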
And to get interesting examples, you have to allow these "degenerate" forms. They're not really degenerate, in that they're non-degenerate over the rational numbers; it's just that the pairing is not perfect over Z. The word number theorists use in this context is "unimodular," I suppose: the associated lattice is unimodular, and that just doesn't happen often enough. Right. So that's one difference: we're dropping the non-degeneracy assumption. By the way, how do you check non-degeneracy? Recall that over a field, non-degenerate is the same thing as saying that the determinant of the matrix of the bilinear form is invertible, i.e. is in K^×. In terms of a basis for your finite-dimensional vector space, you get a square matrix by plugging the basis vectors into your bilinear form, and if that's an invertible matrix, then your pairing is perfect; and that's actually an if and only if. This is also true over Z. In other words, non-degeneracy is if and only if the determinant of the matrix you get is a unit. You don't want to say it's non-zero; you want to say it's a unit. But now, one of the differences between the integers and a field is that there are very few units in the ring of integers: just ±1. So to be non-degenerate over Z, the matrix you write down from the quadratic form must have determinant ±1, not merely non-zero. For example, if you look at the matrix for f(x, y) = x² + y², you get the matrix with rows (2, 0) and (0, 2), which means the determinant is equal to 4.
But the fact that the unit group is so small also gives us something nice. Recall that this determinant depends on the basis, but only up to squares of units: when you do a change of basis on your quadratic form, an isomorphism of quadratic forms, if M is the matrix of the form and A is the change-of-basis matrix, then the matrix of the new form is A M Aᵀ. So when you take determinants, you pick up the square of det(A) multiplied by det(M). Again, all of this works over an arbitrary commutative ring, in particular over the integers. But now we learn that over Z, the determinant of our quadratic form is independent of the basis, because the square of any unit is 1. So nothing is really changing, and that's nice: we don't have to remember the determinant as a coset anymore. It's really a determinant, or a discriminant (there's some sign to sort out), well-defined on the nose: just an integer. Okay. So now, right, I'm going to specialize. There's a lot to say about the general theory of integral quadratic forms, but I think it's a nice idea to specialize to the first non-trivial case. You see some special phenomena which make the discussion easier, but you also see a lot of the general phenomena that appear in the general theory. So we'll specialize to two variables, also known as binary quadratic forms. "Binary" is a weird word, some old-fashioned thing that stuck; it stands for two variables, I guess. So, right: in coordinates, this means f(x, y) = ax² + bxy + cy², where a, b, and c are integers.
And if you calculate the matrix of the associated bilinear pairing (I'm not going to go through this), you get the matrix with rows (2a, b) and (b, 2c). So you always get twice the coefficients on the diagonal, and it's always a symmetric matrix. And in fact you can go backwards: if you have a symmetric integer matrix whose diagonal entries are even, you can write down the quadratic form it comes from. But anyway, I just want to remark right now that the determinant is a familiar quantity; well, minus the determinant is: it's b² − 4ac, your familiar discriminant from high school, right? So let's call D = b² − 4ac the discriminant. Achille, does this match your sign convention, that in this case the discriminant should be minus the determinant? Do you remember? I think yes, right, because b² − 4ac has got to be the discriminant; you don't want it to be minus the discriminant, that would be confusing. Now, there's a fundamental dichotomy. Whoops. Well, we're going to throw out discriminant equal to zero; that's too degenerate for us. So there's D positive and D negative, which can be read off over the reals. The discriminant is defined for an integer quadratic form, but when you have an integer quadratic form, you get a quadratic form over an arbitrary field, right? Integer coefficients make sense in any commutative ring, hence in any field, so you can just view the form there. So any invariant of quadratic forms over a field gives you an invariant of quadratic forms over Z. And in particular, the discriminant doesn't change under this, because it's just the same matrix.
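The two computations just described, the Gram matrix of a binary form and the basis-independence of its determinant, fit in a few lines of code. This is my own sketch (function names are made up): it builds the matrix with rows (2a, b), (b, 2c), checks that the discriminant b² − 4ac is minus its determinant, and checks that a change of basis A with det A = ±1 leaves the determinant unchanged, matching det(A M Aᵀ) = det(A)² det(M).

```python
def gram(a, b, c):
    """Gram matrix of the bilinear form attached to a*x^2 + b*x*y + c*y^2."""
    return [[2 * a, b], [b, 2 * c]]

def det2(m):
    """Determinant of a 2x2 matrix."""
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

def change_basis(m, A):
    """Return A · M · Aᵀ for 2x2 integer matrices, the matrix of the
    form after the change of basis A."""
    am = [[sum(A[i][k] * m[k][j] for k in range(2)) for j in range(2)]
          for i in range(2)]
    return [[sum(am[i][k] * A[j][k] for k in range(2)) for j in range(2)]
            for i in range(2)]
```

For x² + y² this recovers the diagonal matrix with 2's and determinant 4, and applying the shear (x, y) ↦ (x + y, y), which has determinant 1, preserves the determinant of any form's matrix.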
So we can look over the real numbers. We have a two-variable quadratic form, and Sylvester's law of inertia says that its isomorphism type over R is recorded by just which signs you have on the diagonal, pluses and minuses. Discriminant D > 0 corresponds to the case where you have a plus and a minus, which means the form is indefinite. That's just the definition of indefinite: over the reals you have both pluses and minuses, meaning f takes both positive and negative values. And if the discriminant is negative, that's the same thing as saying you're definite over the reals, which is the same as saying that f's non-zero values are either all greater than zero or all less than zero. And in the definite case, we can arrange all values to be positive by changing f by a sign: just replace f by −f and you switch between the two. Now, f and −f are not isomorphic, right? They can't be, because they take different values; but it's clear that the study of one easily reduces to the study of the other. So what I'm trying to say is that we're going to focus on the definite case. It's the simplest case, for a reason I'll explain in just a second, and we're also going to change the sign of f to make it so-called positive definite instead of negative definite. So why? I want to give you a hint of the arithmetic significance of this dichotomy. We're not going to prove this, but it'll be in the background of the distinction, and you can read the dichotomy off from it: if D > 0, the orthogonal group of your quadratic form is a finite group, whereas if D < 0 it's infinite. I guess I'm saying the same thing twice, since I put the reverse implications too.
Well, actually, you'll prove this on your problem set; in fact, a generalization of it to quadratic forms in an arbitrary number of variables. This is a little more subtle to prove, but you'll do it in one case at least: I want you to study a particular indefinite quadratic form and find its orthogonal group. It's a good exercise in number theory. And this is basically the reason why we're ignoring the indefinite case, because... oh shoot, did I do it backwards? Ah, sorry, sorry. Yes: in the definite case it's finite, in the indefinite case it's infinite. I'm sorry. Yeah, we're sticking to the definite case because it's much simpler to have a finite group of automorphisms. In fact, it's gonna be very, very small; it won't be very big at all. It's not just finite, it's also easy to determine: it's almost always just of order two or order four, I guess, I don't know. And right, having an infinite group of automorphisms makes things a little more complicated. Okay, right. Ah, there's one more thing I want to mention, which is another change in conventions that takes place when you move to the integers. This is a little obscure at first sight, but... I'm sorry? Can you define the notation, please: what's that O sub F? That is the orthogonal group of the quadratic form. I think this notation was used by Achille. What it means is the automorphisms of the quadratic form: those linear changes of variables which leave the form invariant, which don't change the form. So for example, if we look at X squared plus Y squared: if we replace X by minus X, it doesn't change the quadratic form, so that linear change of variables would be in the orthogonal group. If we switch X and Y, it doesn't change the quadratic form. But if we replace X comma Y by X plus Y comma X, which is also an invertible change of variables, that will change the quadratic form.
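[Editor's note: to see the finite versus infinite dichotomy in action, here is a brute-force sketch of my own; the specific forms are example choices, not from the lecture. For x^2 + y^2, the full orthogonal group over Z (including determinant minus one) consists of the eight signed permutation matrices, while the indefinite x^2 - 2y^2 has an automorphism of infinite order coming from the Pell equation.]

```python
from itertools import product

def preserves(M, G):
    """Does the integer change of variables M satisfy M^T G M = G?"""
    (a, b), (c, d) = M
    Mt = [[a, c], [b, d]]
    def mul(A, B):
        return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
                for i in range(2)]
    return mul(mul(Mt, G), M) == G

# Orthogonal group of x^2 + y^2 (Gram matrix 2*I): brute force over matrices
# with entries in {-1, 0, 1} suffices, since the columns must be unit vectors.
G = [[2, 0], [0, 2]]
auts = [M for M in ((row1, row2)
                    for row1 in product((-1, 0, 1), repeat=2)
                    for row2 in product((-1, 0, 1), repeat=2))
        if preserves(M, G)]
assert len(auts) == 8   # the eight signed permutation matrices

# For the indefinite x^2 - 2y^2 (Gram matrix [[2,0],[0,-4]]), the Pell unit
# gives the automorphism (x, y) -> (3x + 4y, 2x + 3y), which has infinite order.
assert preserves(((3, 4), (2, 3)), [[2, 0], [0, -4]])
```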
That will not be in the orthogonal group; that will be an isomorphism between this quadratic form and another quadratic form. Thank you. You're welcome. So when we work over Z, it's a good idea to use a finer notion of equivalence than isomorphism, called strict isomorphism or strict equivalence. This will mean that F and G are related by an invertible linear change of variables... so far I've just stated the definition of isomorphism, right? But now I'm gonna add another condition: with determinant one. So you're kind of throwing away half the possible changes of variables; you require them to have determinant one. The reason for this is not obvious at first sight. Well, it certainly gives us more information, so a priori it's a reasonable thing to do: classifying quadratic forms up to this equivalence is finer, so you're potentially seeing more, right? But it turns out that many of the results are much nicer to state with this finer notion of equivalence than with the coarse one; the statements become much more uniform. And you can understand how to pass from the fine one to the coarse one. Once we've developed the theory, you'll be able to see exactly what causes the results to be more uniform with this notion of equivalence rather than the other. Right. So this was just setting the stage, and I think maybe I'll stop for now; we'll actually get to some non-trivial theorems and results in the next lecture. But maybe I should say how you would make this abstract. We had this nice abstract notion of a quadratic space, whereas this is a concrete thing, with polynomials being related by changes of variables. You can make it into an abstract thing by talking about an oriented quadratic space: a finite free Z-module with a quadratic form on it and an orientation.
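[Editor's note: a small Python sketch of my own showing how a change of variables acts. The Gram matrix transforms as G goes to M^T G M, and strict equivalence only allows det M = +1; the example form and matrices are my own choices.]

```python
# An invertible linear change of variables M acts on the Gram matrix G
# by G -> M^T G M; strict equivalence additionally demands det M = +1.
def transform(G, M):
    (a, b), (c, d) = M
    Mt = [[a, c], [b, d]]
    def mul(A, B):
        return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
                for i in range(2)]
    return mul(mul(Mt, G), M)

G = [[2, 1], [1, 6]]     # Gram matrix of x^2 + xy + 3y^2
S = ((1, 1), (0, 1))     # det = +1: allowed in strict equivalence
# (x, y) -> (x + y, y) turns the form into x^2 + 3xy + 5y^2
assert transform(G, S) == [[2, 3], [3, 10]]

W = ((0, 1), (1, 0))     # det = -1: witnesses an isomorphism, not a strict one
# swapping x and y turns the form into 3x^2 + xy + y^2
assert transform(G, W) == [[6, 1], [1, 2]]
```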
An orientation you could define as an orientation on the real vector space you get from that free Z-module of rank two, anyway. So there is an abstract perspective on this too; I'm just gonna stick to the concrete one. Yes. So, any questions? Maybe I should actually say one more thing, because I didn't get quite as far as I thought I would. There are a couple of problems on the problem set that... well, I guess you can still do them. You just don't have the context for the discussion, so doing them will then make you better appreciate the context which is coming in the next lecture; let's put it that way. Yeah, so problems three and four, I thought I was gonna be giving a little bit of context for, but having the context doesn't make them any easier to solve. Okay, so fine. Yes, Sundar, exactly: a matrix belonging to GLn of Z, so an invertible matrix with integer coefficients, invertible in the sense that its inverse is again an integer matrix. That's the same thing as having determinant plus or minus one. And now I've just excluded determinant minus one for the strict equivalence. Yes, Kinta? Quick question: is there an explicit example of two forms that are isomorphic but not strictly isomorphic? Yes, but it's not so easy to give one right off the top of my head, because you do need to go to pretty high discriminant. I think the lowest discriminant for which it happens is minus 23. It's a good question; it's not obvious. We'll have better context for it after the next few lectures, and then I'll be able to give an example. Thank you. You're welcome. Yes, Eleftherios? So over Z, if we consider the quadratic form X squared plus Y squared, this binary quadratic form: if we write down its matrix, it's two times the identity, if I'm not mistaken, which would mean that its determinant is equal to four, which is not in Z star, not plus or minus one.
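[Editor's note: the point that "invertible over Z" means "determinant plus or minus one" can be checked directly; a small sketch of my own, with hypothetical example matrices.]

```python
from fractions import Fraction

def inv2(M):
    """Inverse of a 2x2 matrix over the rationals (adjugate over determinant)."""
    (a, b), (c, d) = M
    det = Fraction(a * d - b * c)
    return [[ d / det, -b / det],
            [-c / det,  a / det]]

# det = +1: the inverse again has integer entries, so M lies in GL_2(Z)
M = ((2, 1), (1, 1))
assert all(x.denominator == 1 for row in inv2(M) for x in row)

# det = 2: invertible over Q, but the inverse is not integral, so not in GL_2(Z)
N = ((2, 0), (0, 1))
assert any(x.denominator != 1 for row in inv2(N) for x in row)
```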
So would that mean that this form is degenerate over Z? Well, I would hesitate to use the word degenerate. The word people use when working over Z is unimodular: this form is not unimodular. I wouldn't really call it degenerate, because over the rational numbers it's not degenerate, so it doesn't really feel degenerate; but it's not quite a perfect pairing either. So I think one should come up with a new word. I mean, unimodular works, but that terminology only really works over Z, so I don't know. I don't think it's a good idea to call it degenerate. What it means is that you don't get an isomorphism from the finite free Z-module to its dual; you get something which differs from an isomorphism by some finite amount, in some sense. It's an injection with a finite cokernel, you could say. I see, thank you. You're welcome. I had a question. Yes, please. So is there a relation between the orthogonal group of the quadratic form over the integers and the unit group of the corresponding quadratic field? Because we see a similar phenomenon happening with the units: in real quadratic fields, for instance, the unit groups are generally infinite. Yes, indeed, there is. And in imaginary quadratic fields, the unit groups are finite. Indeed, we're going to see that these concepts are very closely related. So, just a spoiler alert: we're going to see that a source of quadratic forms is ideals in rings of integers of quadratic fields. We're going to introduce all these concepts, but for every ideal in the ring of integers of a real quadratic field, you can write down a quadratic form. And then if you have any unit, you can act by it and you get an automorphism. Yeah.
Well, sorry, I mean, you have to exclude... in the real quadratic case it's not as simple, which is another reason for excluding that case. But in the imaginary quadratic case, yes: every unit gives an isomorphism of the corresponding quadratic space. And so you at least get an injection from the units in the ring of integers of the quadratic field into the orthogonal group of the quadratic space. Well, no, maybe there is no subtlety in the real quadratic case; we'll see when we get there. Excellent question. Sundar asks: for quadratic forms over Q we have a theorem like Hasse-Minkowski; over Z, do we have such an analogue? The answer is no. In fact, one of the problems on your problem set is going to be this: I'll give you two integer quadratic forms, and I'm gonna ask you to prove they're isomorphic over Zp for all p and over R, but not isomorphic over Z. So we'll talk a bit more about this phenomenon in the next lecture. It's not an easy problem, I don't think; I'm curious to see how you all attack it. Yes? So, just looking back at the Hasse-Minkowski theorem, wouldn't it follow (I think I've read this theorem somewhere, and I just wanna see if I remember it correctly) that if I have a quadratic form over the rational numbers that has dimension at least five and is indefinite, then it is isotropic, right? Yes, that's right. Because the indefiniteness gives me the isotropy in R by Sylvester, and the dimension five and above gives me the isotropy in Qp by the u-invariant. Correct. So then I go from local to global by Hasse-Minkowski. That's a nice corollary of the Hasse-Minkowski theorem. Thank you. You're welcome. Okay, sorry, what was the statement of the corollary? All right. Well, Eleftherios, would you mind repeating yourself? Yeah, that's fine. So the corollary of the Hasse-Minkowski theorem would be this.
Suppose I have a quadratic form over Q, and I know that it is indefinite, in the sense that it represents both positive and negative values, just as in the usual sense over the real numbers. So it is indefinite, and its dimension is at least five. Then it would follow from Hasse-Minkowski that this form is actually isotropic, because the indefiniteness makes it isotropic over the reals, and the dimension being five or above makes it isotropic over Qp for every p, since the u-invariant of Qp equals four. Therefore, by Hasse-Minkowski, I go from local to global, and it's isotropic over the rationals. Oh, okay. Thank you. You're welcome. This observation in fact proves, I think, Lagrange's theorem for every number: we just take the positive number to the other side, and we get it using this observation. So Lagrange's theorem, that every positive number is a sum of four squares. Oh, really? Well, I didn't know that. Because... so let us take the four-dimensional form one, one, one, one, and then we append minus N to it. Yeah. So using Eleftherios's argument, we get a rational solution for this. Yeah. But then, will that prove Lagrange's theorem? It proves the form of Lagrange's theorem where you allow rational numbers. But I don't know; a priori, I think it's a different question: representing N by sums of squares of rational numbers versus by sums of squares of integers. It's probably worth thinking about. At the moment, I don't see an argument for going from one to the other, but that's an interesting line of thought. Yes, Eleftherios? At the beginning of the course, I remember that Professor Matthew started his first lecture presenting some theorems on sums of squares, mainly for integers: sums of two, three, and eventually four squares. Are we gonna end up proving some or all of those?
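[Editor's note: the construction just described, appending minus N to the form with coefficients one, one, one, one, can be illustrated with a naive search. This is my own sketch; the search bound and the choice N = 7 are arbitrary, and the search only covers nonnegative coordinates.]

```python
from itertools import product

def isotropic_vector(N, bound=6):
    """Search for a nontrivial nonnegative integer zero of x^2+y^2+z^2+w^2 - N*v^2."""
    for x, y, z, w, v in product(range(bound + 1), repeat=5):
        if (x, y, z, w, v) != (0, 0, 0, 0, 0) and x*x + y*y + z*z + w*w == N * v*v:
            return (x, y, z, w, v)
    return None

# For N = 7 the first zero found, (1, 1, 1, 2, 1), encodes 7 = 1 + 1 + 1 + 4,
# i.e. a representation of 7 as a sum of four squares.
assert isotropic_vector(7) == (1, 1, 1, 2, 1)
```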
Well, actually, at the moment I don't remember which theorems he stated... Like, for example, Lagrange's theorem, that every natural number is a sum of four squares. For now, we're gonna be sticking with two variables, so we won't be talking about Lagrange's theorem. Yeah, that's fair. I see, thanks. Okay... I might say, we are going to study X squared plus Y squared, right? At the very least. So remind me at that point, and I'll tell you how to modify it to handle X squared plus Y squared plus Z squared plus W squared. I'm sorry, I guess I thought that Serre, in the Course in Arithmetic book, does deduce Lagrange's theorem and these various theorems for sums of four squares and also three squares, but I guess I'm forgetting a little bit. Yeah, there is some extra argument needed to go from a rational solution to an integral solution. But the argument exists, you say? The argument exists. Interesting. There's so much stuff in this Serre book. I mean... oh, here it is: the appendix. Yeah, here we go. Oh, there's a lemma of Davenport and Cassels. Okay, all right. I see, okay, that's interesting. Well, people always say Serre is a really good writer. I don't know; I always find it really difficult to read this book. But it's also jam-packed, you know? And some of the arguments feel kind of compressed, with many references. But yeah, there is this section in the book, I just found it: in the edition I have, at least, it's on page 46. There's a Lemma B attributed to Davenport and Cassels that allows you to go from Q to Z in the case at hand. So I will refer you to the book, with my sympathies for how difficult reading it actually is. But it looks like there is a condition that you have to impose on the quadratic form.
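[Editor's note: for the curious, here is a sketch in Python of the descent behind that lemma, as I understand it; this is my reconstruction, not Serre's exact formulation. For f = sum of three squares, a rational solution of f(x) = n can be pushed down to an integral one: the nearest integer vector y always satisfies f(x - y) < 1, and the line through x and y meets f = n in a second rational point with strictly smaller denominator.]

```python
from fractions import Fraction

def f(v):
    """Sum of three squares."""
    return sum(c * c for c in v)

def B(u, v):
    """Associated bilinear form, so that B(v, v) = f(v)."""
    return sum(a * b for a, b in zip(u, v))

def descend(x, n):
    """Davenport-Cassels descent from a rational point with f(x) = n
    to an integral one (valid here because f(x - y) < 1 for nearest y)."""
    x = [Fraction(c) for c in x]
    assert f(x) == n
    while any(c.denominator != 1 for c in x):
        y = [Fraction(round(c)) for c in x]        # nearest integer vector
        d = [xi - yi for xi, yi in zip(x, y)]      # nonzero, with f(d) <= 3/4
        # second intersection of the line x + s*d with the quadric f = n
        s = -2 * B(x, d) / f(d)
        x = [xi + s * di for xi, di in zip(x, d)]
        assert f(x) == n
    return [int(c) for c in x]
```

For example, starting from the rational solution (7/5, 1/5, 0) of f = 2, one step of the descent lands on the integral solution (-1, -1, 0).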
I mean, there is this condition that he puts in parentheses, (H), and under that condition, which applies to sums of three squares, and I guess sums of four squares, or at least sums of three squares, you can go from a rational solution to an integral solution. Oh, yeah... and for four squares, he reduces to three squares by some elementary mod 8 considerations. Well, anyway, okay. So yeah, there are some tricks, and then, Asingar, your proposal does work out after some tricks, yeah. And again, everybody, please do feel free to correct me if I'm mispronouncing your name. A lot of your names are unfamiliar to me, but I'm happy to learn. Any more questions? Okay, well then, I'll see you in a couple hours, or, yeah, see you when the time comes. Good luck with the problem set, and have fun the rest of the day.