All right, thank you. Hi, I'm Akhil Mathew; I'll put my email in the chat. I will be lecturing for the first half of this undergraduate summer school, and then Dustin Clausen will lecture for the second half. Maybe just a few words about the format. You're encouraged to ask questions in general, but I will try to end a few minutes before the hour, so that toward the end you can chat informally; also try to drop in at the office hours in SoCoCo. There will also be some exercises; these are meant as food for thought, things to think about and explore further, and maybe to discuss at the TA sessions, and they will be posted, probably in SoCoCo as well. I also want to encourage everyone to go to the TA sessions. I think that's about all I have to say for the introduction, and I look forward to meeting many of you throughout these three weeks. OK, so let me start with the math and share my screen. Hopefully everyone can see this. Essentially, this is going to be a mini course on the algebra and the arithmetic of quadratic forms. We're going to cover several different aspects: some are more general and field-theoretic, and some, especially in the second half of the course, are more specific to the rational numbers and to connections with number theory.
Today's lecture is meant as an introduction to quadratic forms, and I thought I would start by mentioning some classical results that, in a sense, started this area; they are about sums of squares. For example, there is the following very classical theorem of Lagrange: every positive integer is a sum of four squares. So this is about squares in the integers. We're going to be talking about squares in general fields, but I think the subject started with squares in the integers. Then there is a theorem of Legendre, which states when an integer is a sum of three squares: a positive integer not divisible by four is a sum of three squares if and only if it is not congruent to minus one, that is, to seven, modulo eight. And then there is the theorem of Fermat, which is about sums of two squares. It states that a positive integer is a sum of two squares if and only if every prime factor which is congruent to three modulo four occurs with even multiplicity. I should add, about Legendre's theorem: if you have an integer that is divisible by four and you want to know if it's a sum of three squares, you first write it as a power of four times something not divisible by four, and then apply the criterion to the part that's not divisible by four. So these three theorems give a complete classification of when you can write an integer as a sum of however many squares you want.
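As an editorial aside (not part of the lecture), the three criteria above are easy to verify by brute force for small integers; all function names in the following sketch are our own.

```python
# Brute-force sanity check of the three classical sum-of-squares theorems.
# Illustrative sketch only; function names are our own.

def is_sum_of_squares(n, k):
    """True if n >= 0 is a sum of k integer squares (zeros allowed)."""
    if k == 1:
        r = int(n ** 0.5)
        return any(i * i == n for i in (r - 1, r, r + 1))
    return any(is_sum_of_squares(n - i * i, k - 1)
               for i in range(int(n ** 0.5) + 1))

def legendre_criterion(n):
    """For n not divisible by 4: n is a sum of 3 squares iff n % 8 != 7."""
    return n % 8 != 7

def fermat_criterion(n):
    """n is a sum of 2 squares iff every prime factor p with p % 4 == 3
    occurs to an even power."""
    p = 2
    while p * p <= n:
        e = 0
        while n % p == 0:
            n //= p
            e += 1
        if p % 4 == 3 and e % 2 == 1:
            return False
        p += 1
    return n % 4 != 3        # leftover n is 1 or a prime

# Lagrange: every positive integer is a sum of four squares.
assert all(is_sum_of_squares(n, 4) for n in range(1, 200))
```

Running the same kind of check for the other two theorems over a small range confirms, for instance, that 7 and 15 are not sums of three squares while every other integer below 16 is.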
So these are three classical results in elementary number theory. I also want to mention another result in this vein, which is not over the integers but involves rational functions. This is a theorem of Artin, and it was in fact one of Hilbert's problems. It states the following. Let F be a rational function over the real numbers in, let's say, n variables. Then the theorem is that F is a sum of squares of rational functions — that is, a sum of squares in the field R(x_1, ..., x_n) of rational functions in n variables — if and only if it takes non-negative values wherever it is defined. In fact, you can make this quantitative: there is a further refinement, due to Pfister, that you only need 2^n squares. It's not actually known whether this is best possible, but you can always write such an F as a sum of 2^n squares. So all of these results are saying that when you want to write something as a sum of squares, there are various obstructions that you might naturally write down. For example, in Legendre's theorem there is a congruence condition: if you have a sum of three squares, then that congruence can't happen. Or a sum of squares of rational functions over the real numbers has to be non-negative wherever it is defined. And these results are saying that those are the only obstructions. [Question: does that say 2^n squares are needed?] Let me restate it: F is a sum of, in fact, 2^n squares, and it's actually an open problem what the best bound is.
OK, so these are some general results in this area. But now we want to be more systematic about what a quadratic form is and what types of questions we want to ask. So let F be a field, and throughout I'm going to assume that the characteristic is not two. What we're going to consider is the following; let me call it a pre-definition. A quadratic form over F is a function Q from F^n to F of the form Q(x) = sum over i, j of a_ij x_i x_j, for some coefficients a_ij. For example, it could just be x_1^2 + ... + x_n^2, which is the kind of thing we were thinking about a minute ago. This really is a pre-definition: it's basically what a quadratic form is supposed to be, but I'm calling it a pre-definition because it doesn't take into account the following basic phenomenon, namely that we can make a linear change of variables on F^n. That gives a different function, but we should think of it as the same quadratic form, encoding the same information. So now let me make a more formal definition. Definition: let V be a finite-dimensional vector space over F. A symmetric bilinear form on V is a function B from V x V to F such that the following conditions are satisfied. First, B is bilinear, meaning it is linear in each variable when the other is fixed; that is, B(-, v) and B(w, -) are linear for each fixed vector. Second, B is symmetric: B(v, w) = B(w, v). So this is something like an inner product, for example the usual inner product on R^n.
Now, we're going to be interested not in arbitrary symmetric bilinear forms, but in ones which are non-degenerate. We'll say that B is non-degenerate, or an inner product, if for every vector v in V with v not zero, there exists another vector w such that B(v, w) is not zero. Equivalently: in general a bilinear form gives you a map from V to its dual, and the condition is that this map should be an isomorphism. We're going to say that an inner product space is a vector space with an inner product. And as a matter of notation, I'm often going to denote the inner product like the dot product: sometimes I'll write it as a bilinear form B, but I'll also just use the standard dot product notation. OK, so this is the notion of an inner product space — again, a vector space with a non-degenerate symmetric bilinear form. So what does an inner product space look like? Let's choose a basis e_1, ..., e_n of the vector space V. Given an inner product, we obtain an n x n symmetric matrix (a_ij), where a_ij = e_i . e_j. Conversely, the symmetric matrix determines the inner product by bilinearity. And this n x n symmetric matrix satisfies one other condition: it has to be non-singular, that is, have non-zero determinant, because we're assuming the form is non-degenerate. So non-degeneracy gives that the matrix (a_ij) is non-singular. The conclusion of this example is that if you fix a basis of your vector space, then to give it the structure of an inner product space is exactly the same as giving an n x n non-singular symmetric matrix.
So, for a fixed basis, an inner product is just the same thing as an n x n non-singular symmetric matrix. OK. Now, I started by talking about quadratic forms and then I switched to talking about inner product spaces; so what's the connection? If we're given an inner product space V, with the inner product written in dot product notation, then we can define a function Q from V to the ground field F by Q(v) = v . v. This is called the associated quadratic form. If you write everything out in coordinates, this has exactly the shape written down earlier: we said a quadratic form is a function of the form Q(x) = sum of a_ij x_i x_j, and that is exactly the form this function Q takes in a basis. Moreover, the function Q actually determines the inner product: note that x . y = (1/2)(Q(x + y) - Q(x) - Q(y)). This is polarization — you can recover the dot product from the quadratic form. So I'm going to use the words quadratic space and inner product space interchangeably, because they are equivalent data. This is our definition, and now let's ask some basic questions about a quadratic space, or inner product space. The first question: when are two inner product spaces, or quadratic spaces, isomorphic? This is going to be one of the basic questions we're interested in. And what is an isomorphism here? An isomorphism between quadratic spaces is a linear isomorphism which preserves the quadratic form.
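As a small editorial illustration (not from the lecture; the matrix below is an arbitrary example of ours), here is the passage between a Gram matrix, its bilinear form, and the associated quadratic form, including the polarization identity just stated:

```python
# Sketch: a Gram matrix A determines a bilinear form B and quadratic form Q,
# and polarization recovers B from Q.  The matrix is an arbitrary example.
from fractions import Fraction

A = [[Fraction(2), Fraction(1)],
     [Fraction(1), Fraction(3)]]          # symmetric, det = 5 != 0

def B(x, y):
    """B(x, y) = sum_ij a_ij x_i y_j."""
    n = len(A)
    return sum(A[i][j] * x[i] * y[j] for i in range(n) for j in range(n))

def Q(x):
    """Associated quadratic form Q(x) = B(x, x) = 2 x1^2 + 2 x1 x2 + 3 x2^2."""
    return B(x, x)

def B_from_Q(x, y):
    """Polarization: B(x, y) = (Q(x + y) - Q(x) - Q(y)) / 2."""
    s = [xi + yi for xi, yi in zip(x, y)]
    return (Q(s) - Q(x) - Q(y)) / 2
```

Exact rational arithmetic (`Fraction`) keeps the division by two honest, which is exactly the step that fails in characteristic two, as discussed later in the lecture.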
Equivalently, one which preserves the dot product. So we want some tools for figuring out, if someone hands you two quadratic spaces, whether they're isomorphic. The next question: given a quadratic space, when is there a vector of zero length? That is, given V, when is there a vector v, with v not the zero vector, such that v . v = 0? Such a vector is called an isotropic vector, and such an inner product space is called isotropic. So one of the basic questions in this theory is: when is a quadratic space isotropic — when can you find a non-zero vector of zero length? And the third question is: which elements occur as lengths, or rather lengths squared? That is, which elements of the base field F occur as v . v for some vector v not equal to zero? This is a generalization of — in particular it includes — the question of when an element of your field is a sum of n squares, because your quadratic form could be x_1^2 + ... + x_n^2. One of the goals of this mini course is to answer all of these questions when F is the rational numbers. That is given by the Hasse-Minkowski theory, which is going to be one of the results proved in this course. Maybe I should say something about why we're interested in quadratic forms in particular, and not, say, cubic or quartic forms — why quadratic forms are special. In fact, all of these questions are much more difficult for higher degree forms.
Quadratic forms are special in that there is a very rich theory of them: you can actually answer all of these questions in many cases, and you get interesting answers, whereas it's much harder to say interesting and very general things in higher degrees. So what is special about quadratic forms that leads to this theory? There are probably many reasons, but at least two particularly salient ones, which lead into the next topics in this course. The first is that quadratic forms have tons of symmetry — this is the theory of the orthogonal group — whereas higher degree forms generally don't have as much symmetry. For example, as we'll see, over an algebraically closed field there is only one isomorphism class of quadratic forms in each dimension, and there are lots of symmetries. This connects the theory of quadratic forms over a general field to the arithmetic of the field, say to its Galois group, and so forth. Another, related reason — the next thing I'm going to talk about — is that quadratic forms are the same as inner products, and inner products give you a good notion of orthogonality, one that you can really run with. So I want to start by talking about orthogonality, and then we'll come back to why quadratic forms have lots of symmetries. As usual, let's fix an inner product space V over the base field F. Vectors v and w of the quadratic space V are said to be orthogonal if v . w = 0. And given a linear subspace W sitting inside V, define the orthogonal complement W-perp to be the set of all vectors v in V such that v . w = 0 for all w in W.
So W-perp consists of all vectors of V which are orthogonal to every element of W; this is the orthogonal complement of W. What does this do for us? Well, you always have the basic dimension formula: dim W + dim W-perp = dim V. That's by non-degeneracy, since we're working with a non-degenerate inner product space. One caveat: if you're thinking geometrically, about the usual Euclidean inner product on R^n, then you also have W + W-perp = V, but that's generally not true here. For example, you can have vectors that are orthogonal to themselves; that's not ruled out. So note that W intersect W-perp can be non-zero in general. However, if the inner product on V restricts to an inner product on the subspace W — you can always restrict the bilinear form, and saying that the restriction is an inner product is saying that it's non-degenerate — then in fact W and W-perp are linearly independent and their direct sum is V. So keep in mind that you need this extra condition. And this gives you a nice way of breaking down the inner product: if you choose a subspace W on which the restriction is still non-degenerate, then the form also restricts to an inner product on the complement, and you get a decomposition of your inner product space into W and W-perp, each of which is an inner product space itself. So this gives a way of breaking an inner product space into smaller pieces, and in fact we can formulate this as a general construction.
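To make the caveat concrete (an editorial sketch with our own names, using the 2x2 Gram matrix with entries 0, 1, 1, 0 that reappears later in the discussion): for W spanned by e_1, the complement W-perp is W itself, so the dimension formula holds while V = W + W-perp fails.

```python
# Sketch: the dimension formula holds, but W + W^perp need not be all of V.
# Gram matrix of the example: e1.e2 = 1, e1.e1 = e2.e2 = 0.
A = [[0, 1],
     [1, 0]]

def dot(x, y):
    return sum(A[i][j] * x[i] * y[j] for i in range(2) for j in range(2))

e1, e2 = [1, 0], [0, 1]

assert dot(e1, e2) == 1          # the form is non-degenerate on V
assert dot(e1, e1) == 0          # but e1 is orthogonal to itself

# For W = span(e1):  x . e1 = x_2, so W^perp = {x : x_2 = 0} = span(e1) = W.
# Hence dim W + dim W^perp = 1 + 1 = dim V, yet W intersect W^perp = W != 0.
assert all(dot([t, 0], e1) == 0 for t in range(-3, 4))
```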
If (V_1, Q_1) and (V_2, Q_2) are quadratic spaces — for simplicity let me phrase this in terms of the quadratic forms, though you could equally say it in terms of the inner products — then you can always form the orthogonal direct sum: the vector space V_1 direct-sum V_2 with the quadratic form Q_1 + Q_2. This is the direct sum of quadratic spaces, or inner product spaces, and V_1 and V_2 sit inside it as orthogonal subspaces. This is like an external versus an internal direct sum: given two quadratic spaces, you form their external direct sum, adding the quadratic forms (or the inner products) in such a way that the two pieces are orthogonal. So this gives you a way of building up new quadratic spaces, or inner product spaces, from old ones; you could call this the external direct sum. This turns out to be a really useful procedure — the fact that you can break up a quadratic space into, or build one up from, orthogonal direct sums of smaller subspaces. It's particularly useful because of the basic observation that you can always break up any quadratic space into one-dimensional subspaces: the building blocks, if you build quadratic spaces by this procedure, are the one-dimensional ones. This is called diagonalization. First, some notation. Given an element a in F-cross, we write angle brackets, <a>, for the one-dimensional quadratic form, or one-dimensional inner product space: the space F e_1 spanned by a vector e_1 with e_1 . e_1 = a. As a quadratic form, it is a x^2.
In other words, it's a one-dimensional quadratic space. So what is this phenomenon of diagonalization? It says that any quadratic form — let's say any inner product space — is a direct sum of one-dimensional spaces <a> for various elements a of F-cross. You can always break any inner product space into a direct sum of one-dimensional spaces. [Question: why do you write e_1 rather than just a random vector name?] Oh, I was just choosing a name for the basis element; it's a one-dimensional space, and I'm choosing a vector whose product with itself is a. Maybe that wasn't the best notation: I just mean it's a one-dimensional vector space with a vector whose product with itself is equal to a. If that was confusing, you can just say it's, quote unquote, the quadratic form a x^2. And please do ask questions out loud, since I may not see the chat in real time. OK, so it's a basic fact in this theory that you can always write any inner product space as a direct sum of one-dimensional spaces. Another way of saying it — and this is why it's called diagonalization — is the following: if you choose a basis, then the inner product is given by a non-singular symmetric matrix, and you can choose the basis such that this symmetric matrix is diagonal. What is the proof of this fact? It's a proof by induction.
Let's give the space a name, V, with its dot product. One first checks that there is always at least some vector v in V such that v . v is not zero. And since v . v is not zero, you can write the big vector space V as the scalar multiples of v direct-sum its orthogonal complement: V = Fv direct-sum (Fv)-perp. That works precisely because v . v is not zero; you couldn't do this with an isotropic vector. So you get a decomposition of the inner product space into something one-dimensional plus something (n-1)-dimensional, and then you continue by induction. OK, so this is the process of diagonalization of quadratic forms. In particular, any quadratic form is specified by some symmetric matrix, but you can always put it in diagonal form: any quadratic form is isomorphic to <a_1> direct-sum <a_2> direct-sum ... direct-sum <a_n> for some a_i in F-cross, by this diagonalization theorem. But there's definitely no uniqueness in how you do this. And a lot of the question is: you can always put a quadratic form in diagonal form, but it's not so obvious, if you have two things in diagonal form, when they're actually isomorphic, and whether there's a nice way to see that they're isomorphic. [Question: Akhil, a quick question — does the fact that F has characteristic not two play a role?] Yes, thanks. I'm assuming the field has characteristic not two; let me go back to that point.
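The inductive proof can be turned into an algorithm. Here is an editorial sketch over the rationals (our own implementation, not from the lecture), with the characteristic-not-two step visible in the comments: given a non-singular symmetric matrix A, it produces an invertible P with P-transpose A P diagonal.

```python
# Sketch of diagonalization over Q: find P with P^T A P diagonal.
# Follows the inductive proof: split off a vector of nonzero length.
from fractions import Fraction

def diagonalize(A):
    """A: non-singular symmetric matrix over Q (lists of numbers).
    Returns (D, P) with D = P^T A P diagonal and P invertible."""
    n = len(A)
    A = [[Fraction(x) for x in row] for row in A]
    P = [[Fraction(int(i == j)) for j in range(n)] for i in range(n)]

    def add_col(j, k, c):
        """Column op C_j += c*C_k, the matching row op, recorded in P."""
        for i in range(n):
            A[i][j] += c * A[i][k]
        for i in range(n):
            A[j][i] += c * A[k][i]
        for i in range(n):
            P[i][j] += c * P[i][k]

    def swap_cols(j, k):
        for i in range(n):
            A[i][j], A[i][k] = A[i][k], A[i][j]
        for i in range(n):
            A[j][i], A[k][i] = A[k][i], A[j][i]
        for i in range(n):
            P[i][j], P[i][k] = P[i][k], P[i][j]

    for k in range(n):
        if A[k][k] == 0:
            # find a remaining basis vector of nonzero length, if any
            j = next((j for j in range(k + 1, n) if A[j][j] != 0), None)
            if j is not None:
                swap_cols(k, j)
            else:
                # all remaining lengths vanish; by non-degeneracy some
                # A[k][j] != 0, and then e_k + e_j has length 2*A[k][j],
                # which is nonzero because the characteristic is not two
                j = next(j for j in range(k + 1, n) if A[k][j] != 0)
                add_col(k, j, Fraction(1))
        for j in range(k + 1, n):
            add_col(j, k, -A[k][j] / A[k][k])
    return A, P
```

For the Gram matrix [[0,1],[1,0]] of the example above, the algorithm first replaces e_1 by e_1 + e_2 (of length 2) and then clears the off-diagonal entries.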
There is a theory of quadratic forms in characteristic two, but it is different — in fact there are multiple theories of quadratic forms in characteristic two — because in characteristic not two, the inner product and the quadratic form are the same data by polarization, and polarization involves dividing by two. So in characteristic two, the quadratic form and the bilinear form are not the same thing. [Question: but you still have diagonalization, or not?] I think you do not have diagonalization in general; let me get back to you on that. Anyway, I'm mostly going to work in characteristic not two. OK, so these diagonalizations are not unique. What's the simplest way in which they're not unique? You can always rescale your basis, and that means you rescale the a_i by squares in your field. So — writing the direct sum <a_1> direct-sum ... direct-sum <a_n> simply as <a_1>, ..., <a_n> — we have that <a_1>, ..., <a_n> is also isomorphic to <a_1 u_1^2>, <a_2 u_2^2>, ..., <a_n u_n^2> for any u_1, ..., u_n in F-cross. You can always rescale by squares. That's already one way in which these are not unique, but it already tells us something. For example, we have the following corollary: if every element of F is a square — for example, if F is algebraically closed — then any quadratic form is isomorphic to <1>, ..., <1>.
In other words, informally, it is the quadratic form x_1^2 + ... + x_n^2; if every element is a square, you can always put a form in this shape. [From the chat: the statement is that x^2 + xy + y^2 is not diagonalizable in characteristic two, right?] Yes — I think neither symmetric bilinear forms nor quadratic forms necessarily diagonalize in characteristic two, but let me come back to that, or we can chat about it after the lecture. OK, another corollary of being able to rescale by squares is that any quadratic form over the real numbers is isomorphic to <1>^r direct-sum <-1>^s for some r, s. And in fact there is a theorem, which I think is on the exercises — a theorem of Sylvester — that r and s are uniquely determined by the quadratic form, so they are isomorphism invariants. In general one defines r - s to be the signature of the quadratic form, and the invariants of a quadratic form over the real numbers are the dimension and the signature. So over the complex numbers, or over any quadratically closed field, where every element is a square, the only invariant of a quadratic form is the dimension; but over the real numbers you have two invariants, the dimension and the signature, the difference between the number of plus ones and the number of minus ones. I have just a few minutes remaining today, but I want to say something about the orthogonal group, about symmetries of quadratic forms. Definition: given a quadratic space (V, Q), one defines the orthogonal group O(V, Q) to be the set of all linear isomorphisms f from V to itself such that Q composed with f is equal to Q — in other words, all linear isomorphisms which preserve the product.
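As a small concrete check of Sylvester's theorem in the 2x2 case (an editorial sketch with our own names): for a real non-singular symmetric 2x2 matrix, the pair (r, s) can be read off from the signs of the determinant and the trace, since the determinant is the product of the two eigenvalues and the trace is their sum.

```python
# Sketch: signature (r, s) of a 2x2 real non-singular symmetric matrix.
# det < 0 means eigenvalues of opposite sign; det > 0 means equal signs,
# and then the (necessarily nonzero) trace gives the common sign.
def signature2(A):
    a, b, c = A[0][0], A[0][1], A[1][1]
    det = a * c - b * b
    assert det != 0, "form must be non-degenerate"
    if det < 0:
        return (1, 1)
    return (2, 0) if a + c > 0 else (0, 2)

# The diagonal form x^2 - y^2 and the Gram matrix [[0,1],[1,0]] have the
# same signature, as Sylvester's theorem predicts for isomorphic forms.
assert signature2([[1, 0], [0, -1]]) == signature2([[0, 1], [1, 0]]) == (1, 1)
```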
You can think of it as the automorphism group of the pair of V and the associated quadratic form; it's called the orthogonal group. As I mentioned, the basic feature of quadratic forms is that they have tons of symmetries, so this orthogonal group is quite large, and there is a basic way to construct lots of elements of it, namely reflections. Given a vector v in the quadratic space, or inner product space, V such that v . v is not equal to zero, you can define a map r_v from V to V by the formula r_v(x) = x - 2 (x . v / v . v) v. This is called the reflection through the vector v, and it has the following property: restricted to the line spanned by v, it acts by minus one — it scales the vector v by minus one — while on the orthogonal complement of v it acts as the identity. So, as with the usual reflections in Euclidean space, it is minus one on v and the identity on the orthogonal complement. This already gives you a large supply of elements of the orthogonal group O(V, Q), and in fact there is a theorem of Cartan-Dieudonné — I think on the exercises — that the group O(V, Q) is actually generated by these reflections. OK, I think this is a good place to stop for today. I'll hang around a little to take questions and chat informally; maybe we can discuss the characteristic two situation. Otherwise, I will try to post the notes from today and the slides, and I know there will be some exercises. I hope to see you all here tomorrow. [Question: where will the notes be posted?]
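To see the reflection formula in action (an editorial sketch; the Gram matrix below is an arbitrary example of ours, not one from the lecture), one can check that r_v sends v to -v, preserves the quadratic form, and is an involution:

```python
# Sketch: reflections r_v(x) = x - 2 (x.v / v.v) v lie in O(V, Q).
from fractions import Fraction

A = [[Fraction(2), Fraction(1)],     # an arbitrary non-singular
     [Fraction(1), Fraction(3)]]     # symmetric Gram matrix

def dot(x, y):
    return sum(A[i][j] * x[i] * y[j] for i in range(2) for j in range(2))

def reflect(v, x):
    """Reflection through v; requires v.v != 0."""
    c = 2 * dot(x, v) / dot(v, v)
    return [xi - c * vi for xi, vi in zip(x, v)]

v = [1, 0]
assert reflect(v, v) == [-1, 0]                  # r_v(v) = -v
x = [0, 1]
rx = reflect(v, x)
assert dot(rx, rx) == dot(x, x)                  # Q is preserved
assert reflect(v, rx) == x                       # r_v is an involution
```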
I will try to post them in SoCoCo; I don't know if you have any suggestions on that. [Organizer: No, I think that's a great idea. The other thing is that I will head over to my office right now — please stop by to say hello before you log off for the day. I'll see you there, take care, bye.] Sorry — should I also go to SoCoCo, or should I hang around here? [Organizer: The questions might be about your lecture, so you may want to keep these notes; you stay here for a while. I will go to my office, bye.] OK, bye. [Participant: I'll stay here as well with you, Akhil. If you unshare your screen, then preferably we can all see each other.] Yes, I can unshare my screen. [Question: I have a question about something you said. Earlier you had a quadratic form and you said it is isomorphic to a space, and I was a little confused about what you meant, because a quadratic form is not a space, right?] Yeah, sorry, I was being informal. When I say a quadratic form, I mean the quadratic space: the vector space together with either the function Q on it or, equivalently, the inner product B on it. Let me share my screen again. When I say the quadratic form a x^2 + b y^2, you should think of the inner product space <a> direct-sum <b>, because if you write out what the function Q is doing in coordinates, it's exactly a x^2 + b y^2. [Thank you.] [Someone joins briefly, looking for the organizer's office.] [Comment: This characteristic two thing is always bothersome, but look at the bilinear form whose matrix is the two by two matrix with entries 0, 1, 1, 0.]
[Comment continues: If you translated that directly to a quadratic form, it would be 2xy, which is certainly not diagonalizable over F_2. Someone put x^2 + xy + y^2 in the chat, but again, you can't write a symmetric matrix for that without a one-half.] Right — thank you, that's a great example. So in particular, in characteristic two a non-degenerate symmetric bilinear form need not have any vectors of non-zero length: v . v can be identically zero in characteristic two. Relatedly, people study symmetric bilinear forms over the integers which are non-singular, or non-degenerate, over the integers — determinant one or minus one — but such that x . x is always even. [Comment: Right, in the positive definite case x . x is always an even positive integer. Is that something that comes up later in this course, forms over the integers?] I think only binary forms: in Dustin's lectures he is going to talk about binary forms over the integers, but not specifically these unimodular even lattices in higher rank. Also, in characteristic two one has a definition of quadratic form that differs from the definition of a non-degenerate symmetric bilinear form: one specifies the function Q such that Q(x + y) - Q(x) - Q(y) is a non-degenerate symmetric bilinear form. But thanks for raising that. More questions or comments? [Organizer: I think we should thank Akhil — very nice first lecture.] OK, thank you.
I will be in my office later today after, is there a problem session today or not, Akhil, or the TAs? Anyone? Maybe I can say something on that. So hi, I'm Freddie, I'm one of the TAs. The other two are at GSS things this week, so they can't make it, or at least they can't make it to this. They will be at the problem session. So that will be in Mountain Time, 3:30 to 5, I believe that's correct. You should all have a Zoom link for that, but if you want to meet in SoCoCo, we'll also put it in the auditorium like they did for this meeting. And so we can meet on Zoom slash SoCoCo, and we can organize into smaller groups as we see fit from there. So I do hope to see you all later today and this week. I have one question, probably silly. Early on in the lecture, you mentioned that we assume for our inner product that for any vector v, there's a vector w such that the inner product of v and w is non-zero, right? And then later you include the possibility that we may have vectors that have norm zero, right? My immediate thought is, how is that possible, by Cauchy-Schwarz? Because if you have some vector w such that the inner product with v is non-zero, then by Cauchy-Schwarz, v and w should have non-zero norm. Does this have something to do with the possibility that we're not working in characteristic zero or something like that? Is that why? Well, we could be in characteristic zero. I mean, I guess the issue is that we're not working with the real numbers; we're not working with the usual inner product over the real numbers. So, for example, the example that Paul just suggested, right? So this is an example that's going to come up a lot. Maybe I should share my screen again. So the example of zero, one, one, zero. This is called the hyperbolic plane.
And if you look at this symmetric bilinear form, or quadratic form, it's not positive definite even over the real numbers. It's spanned by vectors e1, e2 such that e1.e2 = 1, but e1.e1 = 0 and e2.e2 = 0. So if you were going to diagonalize it, you could diagonalize it into the diagonal form with entries one and minus one. So this is like the form x^2 - y^2. And so, yeah, naturally you're going to have lots of elements with length squared equal to zero. Wait, is it not the case that we're assuming that every vector has non-negative norm? Are we not assuming that? No, we're not assuming that. Oh, okay, that makes sense. Okay, sorry. We're working over an arbitrary field in particular, so yeah. Oh, okay. Right. Yeah. Yeah, so I guess we will talk about, at least in the problem sessions, the interaction. So in general, the theory of quadratic forms does interact with the notion of orderings on a field. So if you have a field, then there's a well-defined notion of an ordering on the field, such as the usual ordering on the real numbers. And if you have an ordering, then you can use that to help classify quadratic forms, because, for example, you can ask how many plus signs and how many minus signs there are. So if you have an ordering, you can use it to define a signature, with respect to that ordering, of any quadratic form. Okay. Yeah, that makes sense. Thank you. Yeah, but actually, I will say more about this example tomorrow. This example of the hyperbolic plane is a really fundamental example of a quadratic form, because it looks the same over any field.
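As a quick sanity check on the diagonalization just described (an illustrative Python sketch, not from the lecture), the basis f1 = e1 + e2, f2 = e1 - e2 diagonalizes the hyperbolic plane, with diagonal entries 2 and -2, which up to scaling is x^2 - y^2:

```python
def congruent(A, B):
    # Compute B^T A B for 2x2 matrices: the change-of-basis rule for Gram matrices.
    def mul(X, Y):
        return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
                for i in range(2)]
    Bt = [[B[j][i] for j in range(2)] for i in range(2)]
    return mul(Bt, mul(A, B))

H = [[0, 1], [1, 0]]  # hyperbolic plane: e1.e1 = e2.e2 = 0, e1.e2 = 1

# New basis f1 = e1 + e2, f2 = e1 - e2 (uses that 2 is invertible):
B = [[1, 1], [1, -1]]
print(congruent(H, B))  # [[2, 0], [0, -2]], i.e. 2x^2 - 2y^2, equivalent to x^2 - y^2
```

Note the step fails in characteristic two, consistent with the earlier discussion: f1 and f2 coincide there.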
And yeah, in particular, it's spanned by two isotropic vectors, whose dot products with themselves are zero and whose inner product with each other is one. Sorry, so there's something in the chat. Yes, so I will share them in the auditorium in SoCoCo after this meeting. Yes. And I will also send them to, so I don't know if there will be another space for them on PCMI, but at the very least, I will put them in SoCoCo after this meeting. Okay, I'm gonna sign out. I'll be in my office after your problem session. Contact me if you'd like to chat. Thanks a lot. See you tomorrow. Yeah, see you. Yeah. Okay, so there's a question. r sub v, yes, right. I mean that you take this map r sub v, this reflection that was defined, and restrict it to the line through v. So it's multiplication by minus one on the line through v, and it's the identity on the orthogonal complement of that. I have one more quick question. So when you wrote down the theorem about diagonalization, right, that if you have any quadratic form, you can diagonalize it. You didn't say what you do if, as in Gram-Schmidt, you start with arbitrary vectors and start orthogonally projecting. And I was thinking that the reason might be that in the process of doing that, when you subtract off the orthogonal projections, you might get a vector with norm zero, so you couldn't renormalize, right? Is there something along those lines? Is there some kind of criterion for when you could apply the Gram-Schmidt process to a set of vectors, or something along those lines, or is it just, I don't know? Yeah, sorry, I guess it is basically a Gram-Schmidt. I mean, you start with a vector, then take the orthogonal complement, then pick a vector in the orthogonal complement and take the orthogonal complement in there, and then keep going inductively.
It's just that you need to, at each stage, choose a vector with non-zero self inner product, because not every vector will have that property in general. But the point is that if you have an inner product space in characteristic not two, then you can always find a non-zero vector whose self inner product is non-zero. And so you start there, then you take the orthogonal complement, and then you keep going. Okay, yeah, so it is basically that, yeah, sorry. So what I was thinking of as the exact analogue of the Gram-Schmidt process is: if I give you a list of vectors, let's say linearly independent, then you could find another list of vectors such that the span at each step is equal to the span at each step of the first list, but this new set of vectors is orthonormal, right? So I guess you can't do that, because you might be starting with an isotropic vector. Sure, okay, but for instance, suppose that all the vectors are not isotropic. Would it then be possible? It seems like it may still not be possible, because in the process of Gram-Schmidt you subtract off some projections and then re-normalize, over and over again, right? In subtracting off the projections, maybe you get something isotropic again. I think that's right, yeah. Yes. Is there any sort of nice criterion by which, I don't know if it's really of any interest anyway, I don't know if it's something anyone would care about, but the question was whether there's any nice criterion for when... I'm not sure if there's a nice criterion. Yeah, right, I mean, sorry.
So one thing, I mean, if you're in an anisotropic space, a space where all non-zero vectors are not isotropic, so have non-zero self dot product, then you can do this, as you said. But in general, what's going to happen, and I guess I will explain this next time, is that whenever you have a quadratic space, you can always split off a bunch of copies of the hyperbolic plane, which is somehow not that interesting because it looks the same over any field and so forth, plus something anisotropic. And that's unique up to isomorphism. So if you have something anisotropic, then, as we said, the usual Gram-Schmidt is going to work just fine. But, for example, if you have something hyperbolic, yeah, I don't know. That makes sense, definitely. Thank you. Thanks for the question. Sorry, yes, there's a question. Our vectors, yes, everything is finite-dimensional. Yep. Yeah, so I guess you can also develop some of this theory, or there is a theory, if you're not necessarily over a field but over some more general commutative ring as well. Some of that will come up in Dustin's lectures when we consider things over the integers, or the rational numbers. But yeah, we're going to be working with finite-dimensional spaces. Out of curiosity, is there a general outline of the plan for the three weeks? I know the main results that were on the website were Hasse-Minkowski, I think quadratic reciprocity via Milnor K-theory, and the Siegel mass formula. Are those roughly going to be the goals at the end of each of the weeks? Or, I don't know, maybe I missed it, maybe there was a more general plan somewhere. I mean, I guess, yeah.
I think probably there'll be some adjustment on the fly as well, but the plan is that, for the most part, the first week and a half is going to be more about the general algebraic theory of quadratic forms, and I'm going to do most of the ingredients of Hasse-Minkowski. And then when Dustin takes over, he's going to start by proving Hasse-Minkowski. And quadratic reciprocity is also going to come out along those lines, when we set up Milnor K-theory and so forth. And then the bulk of his lectures will be the more analytic stuff about binary quadratic forms over the integers, which I think will include some special cases of the Siegel mass formula. That's the rough outline, but not completely; there'll be some adjustments along the way. Let me think, thanks again. Yeah, thanks. Any more questions and comments? There is this question in the chat. I'm not sure if that's already answered, right? It's from Rachel. Oh, sorry, yes, I did not see. Right, so is there a reference? Thanks, that's a good question. Like, what's a good reference? I think there are a lot of references for this material. In general, one of the classic references is the book by Serre, A Course in Arithmetic. For the general algebraic theory of quadratic forms, there are many textbook sources: for example, there's a book by Lam, and another book I found is by Scharlau. But let me also, yeah, for Hasse-Minkowski in particular, yes. I think there was an Arizona Winter School, for example, a virtual Arizona Winter School, and there are some lecture notes by Charlotte Chan on Hasse-Minkowski and quadratic forms, so that's also a good source. Can you please write those sources down in the chat? Sure, I can try to share and I can write. Yeah, so I think some of the classical references are Serre's.
I think there are books by Lam and Scharlau. Oh, I should also mention there's a book by Milnor and Husemoller, and then there's the Arizona Winter School from 2020. So those are some references that we recommend. Oh, so there's a question I didn't answer earlier: would an analogous problem about quadratic forms be simpler because we have lots of symmetries? Yeah, that's a good question. So I mean, if you look at higher degree forms, then there's not really a simple classification of them, even over an algebraically closed field. I guess what happens is that there are lots of quadratic forms, but quadratic forms also have lots of symmetries and lots of isomorphisms between them. So there's actually only one isomorphism class of non-degenerate quadratic forms in each dimension over an algebraically closed field. But if you have higher degree forms, somehow there are too many of them, so there's also some sort of moduli; over an algebraically closed field it's going to be more complicated. But because quadratic forms all look the same over an algebraically closed field, if you're trying to classify them, or say something interesting about them, say over the rational numbers or over some field, then that will be closely tied to the arithmetic of that field. Like, for example, questions such as how you can write things as sums of squares in that field, and so forth. And so, for example, it turns out, I guess I will say something about this later this week or early next week, there's a very famous conjecture, what was a conjecture, in the theory of quadratic forms, called the Milnor conjecture. And it relates, well, first of all, as I will explain next time, in classifying quadratic forms you can cook up a ring called the Witt ring of a field, which encodes isomorphism classes of quadratic forms, or rather their classes up to hyperbolic forms.
And there was a famous conjecture of Milnor that relates this ring, or rather the associated graded of this ring, to, on the one hand, Milnor K-theory, which is going to play a role in this course, and also Galois cohomology, which I guess I will not say much about. And that was proved by Orlov, Vishik, and Voevodsky. So in particular, there's some sort of deep connection between the theory of quadratic forms and the Galois theory of the field. Yeah, so thanks, Freddie, for writing the references in the chat. Professor, I think you made a comment on higher degree forms. How are they defined? Are they analogous, via trilinear forms, to the bilinear forms that you have discussed? Yeah, I mean, for example, you could have a cubic form on a vector space, or, as you said, a symmetric trilinear form on the vector space, which you can also think of as a homogeneous cubic function on your vector space. Yeah. So, I mean, symmetric means I can permute all three arguments in any way and it stays the same? That's right, yes. So then you'll get into an algebraic geometry problem, which is: over an algebraically closed field, what are the isomorphism classes of cubic forms? Whereas in the theory of quadratic forms, there's only one isomorphism class in each dimension over an algebraic closure. So somehow it's really all about the field. So I think the problem might be that there might not even be a nice presentation, so we might not have something as straightforward as for quadratic forms. Right, so there's not going to be some analogue of diagonalization. That's right, yeah, that's a good point.
I mean, you could imagine that it comes from some, yeah, well, yeah. So, right, there's no analogue of diagonalization. Yeah, thanks, that's a good way of saying it. So, the signature of a quadratic form, where you have the ones and the negative ones in the real case, seems to come from the fact that you need to adjoin a square root of negative one to get the algebraic closure. So I was wondering: given any number, can you find a field whose algebraic closure has degree that number over it? Then, if you ask about quadratic forms over that field, would you get an analogous answer? That's a good question. Actually, the question has a famous answer. It's a theorem of, I believe, Artin and Schreier that, in general, the only case in which a field can have an algebraic closure of finite degree over it is the case of real closed fields, so things that look like the real numbers. Another example would be the real algebraic numbers. And then you get to the algebraic closure by adding a square root of minus one, or of any negative number. But yeah, in general, if you have an ordering on your field, then, or say if you have a real closed field, but you can specify a real closure of a field by specifying an ordering on it, then you can define a notion of signature with respect to that ordering. Yeah. And in general, there will be many different signatures. So, for example, if you take the field Q adjoin the square root of two, then there are going to be multiple different orderings, because there are two embeddings of Q adjoin square root of two into the real numbers, depending on which square root of two you choose. So if you have a quadratic form over Q adjoin square root of two, you have at least two different ways of defining a signature. Oh, can I ask a question?
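The point about multiple signatures can be made concrete (an illustrative Python sketch, not from the discussion; the form x^2 - sqrt(2)*y^2 is my choice of example): the two embeddings of Q adjoin square root of two into the real numbers send the diagonal entries to different real numbers, giving different sign counts.

```python
import math

def signature(diagonal_entries):
    # Signature (#positive, #negative) of a diagonal quadratic form over R.
    pos = sum(1 for d in diagonal_entries if d > 0)
    neg = sum(1 for d in diagonal_entries if d < 0)
    return (pos, neg)

# The form x^2 - sqrt(2)*y^2 over Q(sqrt 2); an element a + b*sqrt(2)
# is stored as the pair (a, b).  Diagonal entries: 1 and -sqrt(2).
form = [(1, 0), (0, -1)]

# The two embeddings of Q(sqrt 2) into R send sqrt(2) to +1.414... or -1.414...
for root in (math.sqrt(2), -math.sqrt(2)):
    entries = [a + b * root for (a, b) in form]
    print(signature(entries))
# First embedding: (1, 1).  Second embedding: (2, 0).
```

So the same form is indefinite with respect to one ordering and positive definite with respect to the other.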
Yes, please. So for a quadratic form, you said there's a symmetric matrix associated to it, right? So does the diagonalization of the quadratic form come from the diagonalization of the matrix? Or is that in any way related? It's not quite. Well, you're not quite diagonalizing the matrix in the usual sense, because you're diagonalizing a quadratic form. So the question to ask is: if you have a change of basis, how does the matrix change? If you're in an inner product space and you have a basis, you get a symmetric matrix, and the question is how that changes if you vary the basis. And it's not quite conjugation of the matrix: you multiply by the change of basis matrix on one side and by its transpose on the other side. I don't know if I should share this. I mean, if you have a symmetric matrix A and you choose a change of basis matrix B, then, depending on your conventions, the new symmetric matrix is going to look like B transpose times A times B. And the claim is that you can put that in diagonal form; that's equivalent to the statement that a quadratic form can be diagonalized. Yes, so yeah. I just popped right in and I heard that last question. Just practically speaking, if you have the matrix, what does it mean to diagonalize it as a quadratic form? It means you do row and column operations, but each time you do a row operation, you do the corresponding, the same, column operation. That's what Akhil just said, with B transpose on one side and B on the other. That means you are doing the analogous operation. So you can play around with matrices and just try to reduce them to diagonal form, where each time you do a row operation, you do exactly the same column operation. All right, yeah, that's it. Any more questions or comments?
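The paired row-and-column recipe just described can be sketched in code (an illustrative Python sketch over the rationals, not from the lecture; it assumes characteristic not two, which enters at the commented step):

```python
from fractions import Fraction

def diagonalize_symmetric(A):
    """Diagonalize a symmetric matrix by congruence: find B with B^T A B
    diagonal, pairing each row operation with the same column operation."""
    n = len(A)
    A = [[Fraction(x) for x in row] for row in A]
    B = [[Fraction(int(i == j)) for j in range(n)] for i in range(n)]
    for i in range(n):
        if A[i][i] == 0:
            # First try to swap in a later basis vector of non-zero length.
            for j in range(i + 1, n):
                if A[j][j] != 0:
                    A[i], A[j] = A[j], A[i]                      # swap rows
                    for k in range(n):                           # swap columns
                        A[k][i], A[k][j] = A[k][j], A[k][i]
                        B[k][i], B[k][j] = B[k][j], B[k][i]
                    break
            else:
                # All later lengths vanish; add a vector pairing with e_i.
                # Here char != 2 is used: (v + w).(v + w) = 2 v.w != 0.
                for j in range(i + 1, n):
                    if A[i][j] != 0:
                        for k in range(n):                       # row i += row j
                            A[i][k] += A[j][k]
                        for k in range(n):                       # col i += col j
                            A[k][i] += A[k][j]
                            B[k][i] += B[k][j]
                        break
                else:
                    continue  # e_i is orthogonal to everything remaining
        for j in range(i + 1, n):  # clear row/column i by paired operations
            c = A[i][j] / A[i][i]
            for k in range(n):                                   # row j -= c*row i
                A[j][k] -= c * A[i][k]
            for k in range(n):                                   # col j -= c*col i
                A[k][j] -= c * A[k][i]
                B[k][j] -= c * B[k][i]
    return B, A

B, D = diagonalize_symmetric([[0, 1], [1, 0]])  # the hyperbolic plane
print(D)  # diagonal, with entries 2 and -1/2
```

The matrix B built alongside records the change of basis, so B^T A B equals the returned diagonal matrix.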
Hey, well, if not, then I hope you all go to the TA session, and I'll try to post my notes in SoCoCo right away. And otherwise, yeah, I will see you tomorrow. Bye-bye.