Today I want to talk about two problems that I find very fun, one of which I worked on for a bit, the other one I've done a lot of looking into but haven't actually worked on. I know I listed three problems in my abstract; we're only going to focus on the first two. So the two I want to focus on are the following. One: how many vectors can you have in d-dimensional space such that among any triple there is a pair that are orthogonal? That's the type of question I want to look at — essentially I want to pack as many vectors as I can into d-dimensional space so that they're fairly orthogonal to each other, but not completely orthogonal. And the second question: how many lines in R^d can there be that pass through the origin and pairwise cross at the same angle? For every pair of lines we can look at the angle at which they cross — always take the one that's at most 90 degrees — and I want every pair of lines to cross at the same angle. These are so-called equiangular lines, and I want to know how many of them I could possibly have.

Let's hop into the first question. Let's call this number, I don't know, how about t(d)? That makes sense, because essentially what we're doing is embedding a triangle-free graph into d-dimensional space. What we can do is let G be the graph on the vectors where uv is an edge of G if and only if the inner product of u and v is nonzero. So we draw the non-orthogonality graph on these vectors: we put an edge between two of them if they happen to not be orthogonal to each other. So what are we really asking? We want G to be triangle-free — hence the t in t(d). We want to know how many vectors we can embed into R^d so that this non-orthogonality graph has no triangles.

This leads us to a silly upper bound, doesn't it? Look at the independence number of G. An independent set corresponds to a mutually orthogonal set, right — pairwise orthogonal guys. Because we're dealing with d-dimensional vectors, the independence number of G is at most d: we can't have d + 1 mutually orthogonal vectors in R^d. So now we get to apply some Ramsey. We know G has no triangle and no independent set of size d + 1, so t(d) is, just trivially, at most the Ramsey number R(3, d + 1): once you have more vectors than that, you're forced to either have a triangle or have an independent set of size d + 1. So we get a crude upper bound; R(3, d) is on the order of d²/log d, so right away we know t(d) is at most a bit below quadratic in d.

There's an easy lower bound on t(d) as well. All we need is that if we pick any three of the vectors, two of them are orthogonal. So one construction: take two orthogonal bases. If we pick any three vectors from two orthogonal bases, two of them have to come from the same basis, and those two are orthogonal. So t(d) is at least 2d. So we know the answer is somewhere between linear and slightly less than quadratic.
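Here is a minimal numpy sanity check of that lower bound — the dimension and the random rotation are arbitrary choices of mine, just for illustration: take the standard basis together with a random rotation of it and confirm that the non-orthogonality graph on those 2d vectors has no triangle.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 6

# Two orthonormal bases of R^d: the standard basis and a random rotation of it.
B1 = np.eye(d)
B2, _ = np.linalg.qr(rng.standard_normal((d, d)))   # random orthogonal matrix
V = np.vstack([B1, B2])                             # 2d unit vectors, as rows

# Non-orthogonality graph: edge iff the inner product is (numerically) nonzero.
A = (np.abs(V @ V.T) > 1e-9).astype(int)
np.fill_diagonal(A, 0)

# Triangle-free iff trace(A^3) = 0 for the 0/1 adjacency matrix.
print("vectors:", len(V), " triangles:", int(np.trace(A @ A @ A)) // 6)
```

Generically the graph here is complete bipartite between the two bases, which of course has no triangles.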
Fantastic. Alright, let's look at a small case — sometimes small cases are useful. What's the answer in two dimensions? We can have at least four vectors, right: two orthogonal bases. But once we have at least three vectors, two of them have to be orthogonal, so without loss of generality we can say e1 and e2 are in our set. Now if we have at least two other vectors, those two must actually be orthogonal to each other. Because say I had two other vectors which weren't orthogonal — not at 90 degrees. Then there's one of e1, e2 that neither of them is orthogonal to, and we can take this vector, that vector, and that guy and get a triple with no orthogonal pair. So in fact the only thing you can do in two dimensions is take two orthogonal bases, and t(2) is exactly 4.

Let's try to bootstrap this argument a little. We notice the following: if V, our set of vectors, contains an orthogonal basis — so we can see one orthogonal basis for the whole space sitting inside there — then in fact V must be exactly two orthogonal bases, by the same basic argument. Pick two vectors outside the basis that aren't orthogonal to each other; there must be one guy in the basis that isn't orthogonal to either of them, and that's a triangle. How does this go in higher dimensions? Same question: fix the orthogonal basis, which by assumption we have, and suppose the remaining vectors — at least d others — are not themselves pairwise orthogonal; pick two of them, u and w, which aren't orthogonal. Wait a second, the basis spans my space, right? Can every basis vector really be orthogonal to at least one of u and w? It's a bit of a geometry argument; the real way to do it is to write u and w as linear combinations in the orthogonal basis, which you can do by projecting, and then you get a contradiction just by writing things out: the inner product of u and w is the sum over basis vectors b of ⟨u, b⟩⟨w, b⟩, and since that sum is nonzero some term is nonzero, so some b is non-orthogonal to both of them. I'd rather not belabor it, but that's how the formal argument goes; the idea is that given two non-orthogonal vectors, I can always find one vector in the basis that's not orthogonal to either, and that's a triangle.

This actually lets us figure out the answer in three dimensions. Why? What's R(3,3)? Six, right? So if V is in R^3 and has at least six vectors, then G, the non-orthogonality graph we draw on V, either has a triangle or an independent set of size 3 — that is, an orthogonal basis. We know it doesn't have a triangle, so it must contain an orthogonal basis, which means everything not inside that basis must also be pairwise orthogonal and form a second orthogonal basis. So the only thing we can have is two bases, and t(3) is exactly 6. We've now shown that in two dimensions the only thing you can do is take two orthogonal bases, and in three dimensions the only thing you can do is take two orthogonal bases.
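Here is a tiny random-sampling illustration of the step we just used twice — completely hypothetical vectors, with the standard basis for concreteness: whenever u and w are not orthogonal, some basis vector is non-orthogonal to both, because ⟨u, w⟩ is the sum of the coordinate products, so some term is nonzero.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 4

# If <u, w> != 0, then some standard basis vector e_i has <u, e_i> != 0 and
# <w, e_i> != 0, since <u, w> = sum_i u_i * w_i -- so {e_i, u, w} would be a
# triangle in the non-orthogonality graph.
for _ in range(10_000):
    u, w = rng.standard_normal((2, d))
    if abs(u @ w) < 1e-9:                 # skip the rare (near-)orthogonal pair
        continue
    assert any(abs(u[i]) > 1e-9 and abs(w[i]) > 1e-9 for i in range(d))
print("every sampled non-orthogonal pair had a common non-orthogonal basis vector")
```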
We can ask the same question about four dimensions, but I'm not sure of the answer there. My guess is that it's still the same for d = 4: R(3,4) is nine, you can't actually get nine vectors — we'll discuss why — and my guess is you'd only have to check a few configurations by hand to find that in four dimensions the answer is also two orthogonal bases. So let's make a conjecture. I mean, we did small cases, and small cases do everything by induction, right? All you need is a base case, and the induction is always the easy part, isn't it? So, conjecture. (Certainly the d = 2 case also covers one dimension, but one dimension is stupid; no one likes one dimension, except maybe undergrads.) The thing we might hope is that the only thing you can do is take two orthogonal bases. We showed that's the case for d = 2 and d = 3; we didn't show it for d = 4, but it feels like it should be close to right, since the Ramsey number there is nine and we believe you can only have eight vectors. Unfortunately, this isn't true. This is something I started with: when I was working on this problem I spent a while trying to show you had to have two bases — in other words, that the non-orthogonality graph has to be bipartite; it comes to the same thing in this setting.

Unfortunately, in five dimensions you can do other things. Let's construct 10 vectors with the two-out-of-three property that don't contain an orthogonal basis. Start with the vector (1, 1, 0, 0, 0) and take its five cyclic shifts, and do the same with (1, -1, 0, 0, 0) — ten vectors, which you can draw as two rings of five. Now let's draw edges where pairs happen to be non-orthogonal. Every pair sitting at the same position in the two rings is in fact orthogonal. The inner ring is non-orthogonal in a five-cycle fashion, the outer ring is non-orthogonal in a five-cycle fashion, and then we get cross edges between consecutive positions. This is a very poorly drawn graph, but essentially what it is: take a five-cycle and blow up every vertex into an independent set of size two, carrying the edges along. This graph has no independent set of size five — the best you can do is two blown-up pairs from non-adjacent vertices of the five-cycle, which is four — so the vectors cannot contain an orthogonal basis of the space; and it has five-cycles, so it's certainly not two bases. And there is no triangle in this graph: the five-cycle is triangle-free, and blowing vertices up into independent sets keeps it that way — maybe you want to go check the inner products. So in five dimensions, and in fact in any odd dimension, you can construct examples of 2d vectors with this two-out-of-three property that don't contain an orthogonal basis. That's a bit of a killer: the structural conjecture is not true.

But the bound itself is. It's actually a theorem — t(d) = 2d — originally proved by Rosenfeld in '91; I wrote this down. When I was working on this I didn't realize it had already been proved, nor that the question went back to Erdős. I did track down Rosenfeld's proof after I had one of my own, and it's not pretty. It's not fun. It's gross. I have a very nice simple proof of this fact — who's ready for a nice simple proof? I love pretty proofs; pretty proofs are the absolute best. Okay, so what are we going to do? We're really going to exploit the triangle-freeness, and then we're just going to do some averaging and Cauchy-Schwarz, because Cauchy-Schwarz is the best inequality.
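Before the proof, here is a quick computational check of that five-dimensional example — assuming, as above, that the ten vectors are the cyclic shifts of (1, 1, 0, 0, 0) and (1, -1, 0, 0, 0):

```python
import numpy as np
from itertools import combinations

d = 5
V = []
for i in range(d):
    for s in (1, -1):
        v = np.zeros(d)
        v[i], v[(i + 1) % d] = 1, s          # e_i + e_{i+1} and e_i - e_{i+1}
        V.append(v)
V = np.array(V)                              # 10 = 2d vectors in R^5

A = (np.abs(V @ V.T) > 1e-9).astype(int)     # non-orthogonality graph
np.fill_diagonal(A, 0)

triangles = int(np.trace(A @ A @ A)) // 6
# Largest pairwise-orthogonal subset = largest independent set of the graph.
largest = max(len(S) for r in range(d + 2)
              for S in combinations(range(len(V)), r)
              if all(A[i][j] == 0 for i, j in combinations(S, 2)))
print(f"{len(V)} vectors, triangles = {triangles}, "
      f"largest orthogonal subset = {largest}")   # expect 10, 0, 4: no basis inside
```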
So the first thing I want to do is prove the following statement. Let N(v) be the neighborhood of v — the neighborhood of the vector in the graph G, so just the guys it's connected to, and connected means not orthogonal to it. The key point is that we don't have any triangles, so N(v) is an independent set in G, simply because there are no triangles. So in fact all of these guys in the neighborhood of any given vector are mutually orthogonal. Fantastic. So let's write down one inequality. Let N*(v) be an extension of N(v) to an orthogonal basis — I mean, N(v) is already an orthogonal set, so we can just extend it to an orthogonal basis of R^d. And let's pretend everything is a unit vector from here on, just to make our lives easier. Then we can write v as the sum over u in N*(v) of ⟨u, v⟩ u — that's just expanding v in the orthonormal basis N*(v). Okay, fantastic. So now let's figure out the length of v. We know it's 1, since everything is a unit vector. So 1 = ‖v‖² = ⟨v, v⟩, and plugging in the expansion and multiplying across, this is the sum over u in N*(v) of ⟨u, v⟩². Fantastic. And these are all non-negative numbers, so let's just forget the vectors we added in the extension and keep only the guys in the original neighborhood: the sum over u in N(v) of ⟨u, v⟩² is at most 1. Call this (*). Nothing interesting happened here; we just used the triangle-free property to guarantee that the neighborhood is mutually orthogonal.

(Why is this only an inequality? Because the extension to a basis may have nothing to do with our original set — we bought some extra vectors, and their inner products with v could be strictly positive, so dropping them only decreases the sum. We took the neighborhood of v and blew it up to a full orthogonal basis just so we could expand; without the extension this is just saying that projecting v onto the subspace spanned by its neighbors can only shrink it. That may be the better way to think about it, but the version with the extension is easy for me to think about: the terms are non-negative, so you can throw them away.)

Now we need one more little inequality — a claim with a short proof. Let A be an n-by-n symmetric matrix. Then tr(A²) ≥ (tr A)² / rank(A). We're going to need this inequality; it's the crucial point in our proof, and it's just Cauchy-Schwarz, so let's give a quick proof — I love this inequality because it's super useful and has worked well for me on lots of different problems. Well, what's tr(A²)? It's just the sum of the squared eigenvalues, λ₁² out to λ_r² where r is the rank; the rest of them are zero. Fantastic. Now we just apply Cauchy-Schwarz.
The sum of the squares of the nonzero eigenvalues is at least one over the number of them times the square of their sum: λ₁² + ... + λ_r² ≥ (λ₁ + ... + λ_r)² / r, and the sum inside is just tr(A). Really short and sweet, and it's just Cauchy-Schwarz.

(Could you get this another way? My guess is you should be able to get it from some functional inequality — really you're just looking at a symmetric form — but I don't have another explanation, and this is a very clean proof. And just to remind ourselves, since everyone's linear algebra gets rusty: symmetric does not mean the eigenvalues have multiplicity one — some may coincide. All we're using is that the trace of a matrix is the sum of its eigenvalues, that the eigenvalues of A² are the squares of the eigenvalues of A whether or not A is symmetric, and that for a symmetric matrix all of them are real, which is what lets me apply the inequality like this.)

Now, bootstrapping this with (*), we're going to be done. The big proof: look at the sum over all u and v in our big set V of ⟨u, v⟩². We'll bound this from above and from below, first using (*). One thing I can do is write it as the sum over v in V of the sum over u in V of ⟨u, v⟩². For a fixed v: we get a 1 when u is v itself; v is orthogonal to anything that's not in its neighborhood, so those terms are zero; and the neighbors contribute the sum over u in N(v) of ⟨u, v⟩², which is at most 1 by (*). So each v contributes at most 2, and the whole sum is at most 2|V|. That's the upper bound.

Now the lower bound, using the trace inequality. Let A be the Gram matrix: we just list all the inner products, so A_{uv} = ⟨u, v⟩. One thing to notice is that the sum over all u, v of ⟨u, v⟩² is exactly tr(A²), because A is symmetric. Okay, now we use the inequality: this is at least (tr A)² / rank(A). What's tr A? The diagonal entries are all 1, so it's |V|, and we get |V|². And what's an upper bound on the rank? All of these vectors live in d-dimensional space, so rank(A) is at most d. Fantastic. Putting the two together: 2|V| ≥ |V|²/d, which is the same thing as saying |V| ≤ 2d. Very pretty and very simple. Rosenfeld's original proof was not pretty — for some reason he decided to look at the trace of A cubed and do all these weird inequalities with triples of eigenvalues, and it was disgusting and I don't understand why it works. This one I understand.
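Here is a quick numpy sanity check of both ingredients — the sizes are arbitrary choices of mine. Note that on the two-orthogonal-bases example both inequalities are actually tight, which is reassuring since that example has exactly 2d vectors.

```python
import numpy as np

rng = np.random.default_rng(2)

# (1) The trace inequality tr(A^2) >= tr(A)^2 / rank(A) on a random symmetric A.
M = rng.standard_normal((8, 5))
A = M @ M.T                                       # symmetric, rank <= 5
lhs = np.trace(A @ A)
rhs = np.trace(A) ** 2 / np.linalg.matrix_rank(A)
print(f"tr(A^2) = {lhs:.2f} >= tr(A)^2/rank = {rhs:.2f}: {lhs >= rhs}")

# (2) Both sides of the main argument on two orthonormal bases of R^d (n = 2d):
#     n^2/d <= sum of squared inner products = tr(G^2) <= 2n.
d = 7
Q, _ = np.linalg.qr(rng.standard_normal((d, d)))
V = np.vstack([np.eye(d), Q])                     # n = 2d unit vectors
G = V @ V.T                                       # Gram matrix
n, S = len(V), np.trace(G @ G)
print(f"n = {n}:  n^2/d = {n**2 / d:.2f} <= tr(G^2) = {S:.2f} <= 2n = {2 * n}")
```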
So in fact we now have a full answer, and it's very nice and simple. One thing I like to say about this is that there's a natural extension of this type of question — the normal Ramsey move in this field — which is: how many vectors can you have such that among any k of them there's an orthogonal pair? We asked it for triples; extend it to any k, so now we're avoiding k-cliques and independent sets of size d + 1. You no longer get a nice answer. I'm not going to prove this at all, but a theorem of Alon and Szegedy from '99 shows — let's call this quantity t_k(d) — that there exists some positive δ such that for every k, t_k(d) is at least d^(δ log k / log log k); I wrote that down. So as a function of d it's polynomial, but with an exponent that grows with k — or as k goes to infinity it blows up, whichever way you want to think about it — so for large k you get far more than linearly many vectors. There's also an upper bound; I forgot to write it down, but it just doesn't seem quite as interesting, to me at least. There are lots of weird things going on there, and I don't understand what should be going on.

Yes — when can we achieve the maximum in the proof we just did? The second part seems easy to turn into an equality: you need all the nonzero eigenvalues of the Gram matrix to be the same. And for the first part, the sum over the neighborhood of ⟨u, v⟩² would have to be exactly 1 — which means v would have to be orthogonal to everything that's not in its neighborhood, right? Which is exactly the situation you're in with two orthogonal bases: fix one vector, its neighborhood is exactly the other basis, and it's orthogonal to everything not in there, namely the rest of its own basis. And then for a pair of orthogonal bases you'd want to verify the eigenvalue condition too — I guess it's not so easy to see that all the nonzero eigenvalues are the same in that case, but with two orthogonal bases you could argue it. I'm not sure how you'd argue it for the other example I gave, and I'm also not sure how many kinds of extremal examples there could be that aren't two bases. My guess is there could be tons of them: I gave one example that isn't a pair of orthogonal bases, and it works in any odd dimension; my guess is you could come up with lots of group actions, build graphs out of those, and it'll be gross. So my guess is you won't get any real structure out of this — at least, I tried to at first, and it's not true. I actually came up with this proof first and then realized you can't always have two orthogonal bases: I found the example by trying to analyze when the proof is tight and realizing there was no reason it had to force two bases.

Questions? All right, let's go into the second problem — this was a fun problem, I like it a lot. So the second question we wanted to ask: how many lines can we have in d dimensions such that they all cross at the same angle?
An equivalent way of asking it: how many unit vectors can you have in R^d such that for every u ≠ v in your collection, the inner product of u with v is always ±α, for a single fixed α? Let's normalize α to be the positive one — take the absolute value. This really is the question about equiangular lines: for each line you can take either unit vector on it, so the inner products flip sign with the choice, and you should normalize by always taking the smaller angle. One dimension — I said earlier it's only for undergrads, but sometimes it's easy to think about. Let's call this number EQ(d), for equiangular. EQ(1): how many lines can you have passing through the origin of R^1 such that they all intersect at the same angle? There's only one line in one dimension; that's easy. Maybe the answer is always 1 — that would be great.

I claim we can do better with constructions. EQ(2) is at least 3, and this is easy to see: draw a regular hexagon and take the three lines that pass through opposite vertices. Those lines all intersect at the same angle, so you can get at least three of them in two dimensions. EQ(3) is at least 6 — the same construction with opposite vertices of the icosahedron. And there's one other nice one: EQ(7) is at least 28. This one's fun, so let's actually give the construction; let's look at eight dimensions for a second. Consider all of the vectors v_{ij} in R^8, for {i, j} a pair from [8] — note that 8 choose 2 is 28 — with a 3 in positions i and j and a -1 everywhere else. So for example v_{3,4} is the vector (-1, -1, 3, 3, -1, -1, -1, -1): position 3 has a 3, position 4 has a 3, and everything else is -1. I claim these are equiangular, and again there are 8 choose 2 = 28 of them.

Take the inner product of any two of them — let's not worry about them being unit vectors; they all have the same norm. Either the pairs {i, j} and {k, l} completely miss each other, or they share one index. If they miss each other: each 3 of one vector lands on a -1 of the other, and there are four such spots, contributing 4 times -3, which is -12; the remaining four coordinates are -1 against -1, contributing +4; total -8. If they share one index: the two 3s that line up give 3 times 3, which is 9; the other 3 of each vector lands on a -1, contributing -3 twice, so -6; and the remaining five coordinates give +5; total 9 - 6 + 5 = +8. (I can't do arithmetic live, but that's what it comes to.) So one case gives -8 and the other +8 — the same absolute value, so these lines are equiangular. But you should be unhappy with me, because I said they should live in R^7, not R^8. Notice, though, that every one of these vectors sums to 3 + 3 - 6 = 0, so they're all orthogonal to the all-ones vector.
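Here is a quick numpy check of this construction — 28 vectors, a common norm, inner products ±8, all orthogonal to the all-ones vector:

```python
import numpy as np
from itertools import combinations

# v_{ij}: a 3 in positions i and j, a -1 everywhere else.
V = []
for i, j in combinations(range(8), 2):
    v = -np.ones(8)
    v[i] = v[j] = 3
    V.append(v)
V = np.array(V)                                  # 28 x 8

G = V @ V.T
print("squared norms:", set(np.diag(G)))                      # {24.0}
print("off-diagonal inner products:",
      {G[i, j] for i, j in combinations(range(28), 2)})       # {-8.0, 8.0}
print("orthogonal to all-ones:", np.allclose(V.sum(axis=1), 0))
print("common |cos| =", 8 / 24)                               # 1/3
```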
Since they're all orthogonal to the all-ones vector, these 28 vectors really live in a 7-dimensional subspace; you can rotate the all-ones direction down to wherever you like — it's just easier to write them down in R^8. Fantastic — examples. It also turns out that EQ(4) is exactly 6: it's at least 6 because the quantity is clearly monotone in d, and it happens that you can't do any better in R^4 than in R^3 — I don't know a good reason why, so let's ignore that. I claim that all of the values above — 3, 6, and 28 — are in fact equalities, and rather than do them one by one, what we'll actually prove is the following general upper bound: EQ(d) is at most d(d+1)/2, i.e., (d+1 choose 2). Unfortunately it's not known whether this upper bound can be attained infinitely often. In fact, as far as I'm aware there is only one other dimension — 23, I believe — where a matching construction is known, and it's gross; beyond those, I'm pretty sure no one knows whether any other tight dimensions exist. But let's give the proof of the upper bound. It's very pretty, very linear-algebraic, and I love it.

Here's what we're going to do. Let x_1, ..., x_n be unit vectors in d-dimensional space — elements of the sphere S^(d-1) — that are equiangular, say with |⟨x_i, x_j⟩| = α whenever i ≠ j; and they're unit vectors, so ⟨x_i, x_i⟩ = 1. We're going to use a rank argument, a dimension argument. Here's the claim: the projection matrices x_1 x_1^T, ..., x_n x_n^T are linearly independent, and that immediately gives the upper bound of (d+1 choose 2) — namely, these are all symmetric d-by-d matrices, so if they're linearly independent there can be at most the dimension of the space of symmetric matrices of them, which is (d+1 choose 2). Fantastic. So if we can show the claim, we get the upper bound for free. Everyone happy? Anyone unhappy? This is very clever, and I love it.

Okay, so let's show they're linearly independent — come on, freshman linear algebra, right? (Independent in what space? The space of d-by-d symmetric matrices: x_i x_i^T is a d-by-d matrix and it's symmetric. We're just using that the number of linearly independent elements can't exceed the dimension of the space they live in, and the space of d-by-d symmetric matrices has 1 + 2 + ... + d = (d+1 choose 2) free entries. Everyone happy?) So suppose there exist c_1, ..., c_n — the usual jazz — with c_1 x_1 x_1^T + ... + c_n x_n x_n^T = 0. We just have to show all the c_i are zero. I may not do this the slickest way — I forgot to re-read the paper, so I'm just going off how I did it once.
So this might not be the best route, but let's take the trace of both sides. What's the trace of x_1 x_1^T? It's the inner product ⟨x_1, x_1⟩, so it's 1. So if I take the trace of both sides, we get that the c_i have to sum to zero: c_1 + ... + c_n = 0. Let's do something more: why don't we square it? We know 0 = tr((c_1 x_1 x_1^T + ... + c_n x_n x_n^T)²). Writing it out, this is the trace of the sum over all i and j of c_i c_j x_i x_i^T x_j x_j^T, and passing the trace inside, it's the sum over all i and j of c_i c_j ⟨x_i, x_j⟩². (Why is the trace of that product the squared inner product? Well, x_i^T x_j is the inner product, a scalar, and the trace is cyclic, so tr(x_i x_i^T x_j x_j^T) = (x_i^T x_j)(x_j^T x_i) = ⟨x_i, x_j⟩². You look unhappy — can you really use cyclicity here, when x_i is a vector and the factors aren't square? Yes, it's perfectly fine with any matrices, as long as the product is square both ways around — the trace has to be taken of a square matrix, and here one way it's d-by-d and the other way it's one-by-one.)

Now let's separate things out. The diagonal terms i = j give the sum of the c_i². For everything off the diagonal, the inner product is ±α and we're squaring it, so those terms give c_i c_j α². So 0 = Σ_i c_i² + α² Σ_{i≠j} c_i c_j. I don't like the i ≠ j, so let's push the diagonal back in: this is (1 − α²) Σ_i c_i² + α² Σ_{i,j} c_i c_j — we just pushed an α² c_i² in and compensated. What can we do from here? I want to use the fact from before that the c_i sum to zero. Jody, do you remember how I did this? Right: the double sum factors as (Σ_i c_i)(Σ_j c_j), which is zero by the first trace computation. So (1 − α²) Σ_i c_i² = 0. And wait a second — α is definitely not ±1, so we can cancel the (1 − α²), the sum of the c_i² is zero, every c_i is zero, the projections are linearly independent, and n is at most (d+1 choose 2). (Why can't α be ±1? Because that would be stupid: if α were ±1, every vector would have to be plus or minus every other vector.)
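Here is a small numeric illustration of the whole argument on the d = 3 case, using the six icosahedral lines (the coordinates via the golden ratio and the rounding tolerance are just my choices):

```python
import numpy as np
from itertools import combinations

# Six lines through opposite vertices of the icosahedron: one unit vector each.
phi = (1 + np.sqrt(5)) / 2
X = np.array([[0, 1, phi], [0, 1, -phi],
              [1, phi, 0], [1, -phi, 0],
              [phi, 0, 1], [-phi, 0, 1]], dtype=float)
X /= np.linalg.norm(X, axis=1, keepdims=True)

# Equiangular: |<x_i, x_j>| takes a single common value (1/sqrt(5)).
cosines = {round(abs(X[i] @ X[j]), 10) for i, j in combinations(range(6), 2)}
print("common |cos|:", cosines)

# The proof: the rank-one projections x_i x_i^T are linearly independent inside
# the 6-dimensional space of symmetric 3x3 matrices -- so 6 = 3*4/2 is maximal.
P = np.array([np.outer(x, x).ravel() for x in X])
print("independent projections:", np.linalg.matrix_rank(P), "of", len(X))
```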
With all the inner products equal to ±1 you'd literally have only one line, and that's stupid — we can certainly do better than one line; an orthonormal basis already gives d lines. So α is safely not ±1, and EQ(d) is at most (d+1 choose 2). Such a pretty result. And for the cases we just saw — d = 2, 3, and 7 — it's tight.

What else is known about this number? I won't prove this, because it requires fussing around with minimal polynomials and algebraic conjugates, but what's known is that if equality holds, then either d is 2 or 3, or d + 2 is the square of an odd number. It's really fussy and I don't like it, but it means equality can't hold very often (and fine, d = 1 too, if that makes you happy). So it's known to be pretty restrictive — d + 2 has to be the square of an odd number — and it's unknown in general whether the bound continues to be achieved infinitely often. No one knows.

On the lower bound side: for a long time the record was a construction — we might actually go through it, I like it — of Lemmens and Seidel; this was in '73, and I believe the equality-condition result I just mentioned is from right around then too, just a little earlier. They showed that EQ(d) is at least on the order of d^(3/2), which isn't great — remember the upper bound is quadratic. That was the state of the art for a while; no one could beat d^(3/2) until fairly recently. It's a cute construction using projective planes and Hadamard matrices, and it isn't too surprising: you kind of throw them together and use their structural properties. But then de Caen, in 2000 — this result is just kind of a miracle — showed that EQ(d) is at least roughly (2/9) d², for infinitely many d. It's a really short proof, but it piggybacks off a lot of strange results, basically saying: look, the algebraic combinatorialists have had these matrices floating around forever; we can just plug them in and it works. It's kind of amazing that no one noticed the connection earlier, but they're pretty crazy matrices that I don't understand. Still, it's cool: we now know the answer is quadratic. Unfortunately, as far as I'm aware, that's still the state of the art, and my guess is that any improvement would be about weakening the "infinitely often" condition — I think d has to be related to a power of 4 somehow in that construction. How much time do I have, Kevin? Time to go through a construction.

Question: is there a new state of the art on the lim inf of this sequence, or is the lim inf still just d^(3/2)? No — there's a paper from not too terribly long ago which shows how to take a special quadratic construction, which I think in a special case is de Caen's, and use a sort of convexity argument to bootstrap everything in between, so the count is in fact quadratic for every d. The constant is worse, because you have to stretch things out to interpolate the lower bound — I read the paper, but I forget exactly how it worked. They modified de Caen's construction, using mutually orthogonal bases instead of whatever de Caen used.
They got slightly worse constants, but they got to the point where they could push the construction to every dimension, a little bit at a time — I don't know exactly how it worked. Anyway, I think it's nice to go through a construction, at least the Lemmens–Seidel one, especially because I really like it: varying it a little let me solve one of my other problems.

So here's what we're going to do. Start with a projective plane of order q, where q is a power of 2 — I always hesitate between powers of 2 and powers of 4 here; what we actually need is for a Hadamard matrix of order q to exist, and powers of 2 are enough — so say q = 2^i; it's not important. Write down its point–line incidence matrix. What's special about this matrix? It's a projective plane, so we have q² + q + 1 points and q² + q + 1 lines — a square matrix of order q² + q + 1. Every row has exactly q + 1 ones, and the same with every column: every column has exactly q + 1 ones. And lastly, say the rows are the lines and the columns are the points: any two lines intersect in exactly one point, so any two rows share exactly one common 1. By itself this isn't very good — the dimension is q² + q + 1 and we're not getting more lines than that — so we want to blow this thing up.

How are we going to blow it up? Take a Hadamard matrix H of order q: a plus-or-minus-one matrix with H H^T = q I, so a ±1 matrix whose rows are orthogonal. Look at its columns, h_1 down to h_q. Now go to each row of the incidence matrix. It has q + 1 ones: replace the first one in the row by the column h_1, the second one by the column h_2, and so on up to h_q, and the one left-over one just becomes an all-ones column; any zeros get blown up into zero columns as well. So each row of the incidence matrix turns into a q-by-(q² + q + 1) block, i.e., into q new rows. (I'm trying to remember whether we have to avoid an all-ones row of H — I think we do: normalize H so its first row is all ones and don't use that row, so no blown-up row is constant on its support; there's always some alternation between plus and minus ones.) The claim is that the resulting rows are equiangular. First, the count: there are roughly q³ of these vectors, living in R^(q² + q + 1), so dimension roughly q². That's what gives the d^(3/2) construction — q³ vectors living in dimension about q². So we just have to verify that they really are equiangular. Let's check. Does every row have the same norm? Yes: each has exactly q + 1 nonzero entries, all of them ±1 —
— so every row has norm √(q + 1). I'm not really going to care about making them unit vectors; they all have the same norm, so we just need to verify that the inner products all agree up to sign. What happens when I take two rows and take their inner product? Either the two rows came from the same blow-up — the same line of the projective plane — or from different ones. Same blow-up: they have the same support, the q + 1 positions of that line. But on the q positions that got Hadamard columns, the two rows are two different rows of the Hadamard matrix, and those are orthogonal, so all of that zeroes out; the left-over all-ones position contributes 1 · 1. So the inner product is exactly +1 when the rows come from the same blow-up. If they came from two different blow-ups, we use the fact we imported from the actual projective plane: shrinking them back down, they come from two different lines, which intersect in exactly one point. So there is exactly one coordinate where both rows are nonzero, and there both entries are ±1, so the inner product is ±1 depending on whether the signs agree or alternate. (Looking at this again, I'm not sure it mattered whether that first row of H was all ones; I thought it did, but either way it goes through.) So all the inner products between these vectors are ±1 and they all have the same norm, so they're equiangular. How many of them are there? The dimension is about q² and there are about q³ of them, so with d on the order of q² that's on the order of d^(3/2) equiangular lines. A very pretty construction.

(We had the question of whether q should be 2^i or 4^i — it's just to make sure the Hadamard matrix of order q exists, and powers of 2 work: Sylvester's construction gives a Hadamard matrix of every order 2^i. I always forget.) But anyway, it's nice. Hopefully these problems are fun. If you can improve equiangular lines, that would be fantastic, because it's really annoying that we don't even know whether the (d+1 choose 2) bound is achieved infinitely often.

Any comments, concerns, hateful remarks? Question: these finite instances where the bound is achieved — are they just totally ad hoc? They really are; there's no rhyme or reason to it, which is disheartening. I forget where the 23-dimensional one comes from — some weird association scheme, I think. There was a big push at one point to find equiangular lines, and that's essentially where Seidel matrices come from: you take a graph, put zeros on the diagonal and ±1 off the diagonal according to adjacency, and you want to control the eigenvalues. I gather those matrices got picked up by other algebraic people who liked them for their own reasons, but as far as I'm aware they were originally for this purpose, because you can get equiangular lines out of these ±1 symmetric matrices. Are there any other known exact values besides those cases? There are lots of them — a lot of small cases are known exactly, and the values aren't very impressive.
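Here is a compact sketch of the blow-up for the smallest case, q = 2 — the Fano plane together with the 2-by-2 Hadamard matrix. This is my own toy instantiation (the line set is written out by hand), just to watch the inner products come out ±1:

```python
import numpy as np
from itertools import combinations

q = 2
# The Fano plane: 7 points, 7 lines, q + 1 = 3 points per line,
# any two lines meeting in exactly one point.
lines = [{0, 1, 2}, {0, 3, 4}, {0, 5, 6}, {1, 3, 5},
         {1, 4, 6}, {2, 3, 6}, {2, 4, 5}]
n_pts = q * q + q + 1
H = np.array([[1, 1], [1, -1]])          # Hadamard matrix: H @ H.T = q * I

V = []
for line in lines:
    pts = sorted(line)
    for r in range(q):                   # each line blows up into q vectors
        v = np.zeros(n_pts)
        for j in range(q):
            v[pts[j]] = H[r, j]          # first q points of the line: row r of H
        v[pts[q]] = 1                    # the left-over point gets a plain 1
        V.append(v)
V = np.array(V)                          # ~q^3 vectors in dimension ~q^2

G = V @ V.T
print(len(V), "vectors in R^" + str(n_pts))                        # 14 in R^7
print("squared norms:", set(np.diag(G)))                           # {3.0}
print("inner products:", {G[i, j] for i, j in combinations(range(len(V)), 2)})
```

With q = 2 this only gives 14 of the 28 possible lines in R^7, but the q³ versus q² scaling is exactly the d^(3/2) behaviour described above.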
To give a sense of the small cases: I believe there's a pretty large interval of dimensions just above seven where the answer stays 28 — something like from seven up to thirteen or fourteen — you just can't do any better there. You can look the sequence up on the OEIS, or in the original Lemmens–Seidel paper from the seventies, which has a table of the best values known for small dimensions at that time; that's decades old now, and I don't remember the small values offhand, except that four is still six — you can't do better in R^4 than in R^3.

So what's the motivation for this? It's pretty — is there any reason beyond it being interesting? Well, if for some reason you cared about symmetric ±1 matrices that have only two eigenvalues, and fairly big ones, and things like that, then equiangular line systems are equivalent to those matrices. I'm not sure people actually care about that, so I don't know where the interest originally came from, but a few people just work on it, and it's expanded into the combinatorics community a little. It's still a relatively small problem — not too many people think about it — and I'm not sure there was an initial motivation beyond the fact itself.

Anything else? Do I have a recommendation for a source of pretty little problems and proofs like these? Well, there's Proofs from THE BOOK, right? That's a great book. It has things other than combinatorics, but it's a collection of very pretty results with incredibly pretty proofs — I think you can find a free chapter-by-chapter copy online through the authors' pages, though I forget the authors offhand, or just torrent it. Beyond that, I know Noga Alon has written a couple of short survey papers on pretty problems that come up in discrete geometry and combinatorics — I forget what they're called, but it's a small series of short papers that are just: here's an interesting problem, here are some results; here's another interesting problem, here are some results. But yeah — the moral of the story is that discrete math is best.