Thank you, Alina, thanks for inviting me. So I'm going to speak about points, lines, and polynomial identities. It's a theme that connects combinatorics with algebra, algebraic geometry, and important problems in theoretical computer science. Okay, so let me start. What I'm going to speak about — the main object is going to be the Sylvester–Gallai theorem and some relatives of this theorem. I'll speak about applications. Usually I speak about two applications, locally correctable codes and polynomial identity testing. I think we only have time for the second, but if there is time left over after questions, I'd be happy to also mention locally correctable codes. So these are the applications of the Sylvester–Gallai theorem, and then I'll speak about higher-degree analogs, new results, and give a sketch of the proof. Okay, so point–line incidences. This is the main theme. It's an important subfield of discrete geometry and combinatorics. Basically the questions look like the following: we are given collections of points and lines satisfying certain properties, and we have to bound some combinatorial measure related to these incidences — for example, the number of point–line incidences, or the number of lines satisfying certain properties, the number of points, et cetera. There are many famous results and conjectures related to this theme, like the Szemerédi–Trotter theorem, the Guth–Katz theorem, the Kakeya problem, et cetera. So this is a very active area with many beautiful open problems and new developments. In this talk I'll speak about the Sylvester–Gallai theorem and some closely related questions and problems. Okay, so the Sylvester–Gallai theorem has an interesting history. It was conjectured by Sylvester in 1893, then posed again independently by Erdős in 1943. It was proved by Melchior in 1941 and then by Gallai in 1944, and it came to be called the Sylvester–Gallai theorem. So the theorem is the following.
We are given a finite set of points in the real plane, and the points satisfy the following interesting property: any line that passes through two points of the set meets a third point of the set. So for example, take this line — it passes through two points, and it meets the set in a third point. This line has the same property, this line has the same property, et cetera. So if all the lines satisfy this property, then the theorem says that the points must be collinear. Certainly collinear points satisfy this property, but these four points, for example, do not: there is a line that passes through exactly two of them. Okay, so that's the Sylvester–Gallai theorem: any finite collection of points, such that any line passing through two of them meets a third point, must be collinear. Let me show you the proof, because it's very short and very elegant. Consider a point p of the set and a line ℓ, where ℓ passes through at least two points of the set and p is not on ℓ, such that the distance between p and ℓ is the smallest among all such point–line pairs. So here's the line, here's the point, and let's say that the distance is d. Now, by the assumption on the set, there must be at least three points of the set on the line. So at least two of them must be on the same side of the foot of the perpendicular from p to the line. And now, if you look at the line through p and the farther of these two points, then the nearer point is at a smaller distance from that line — a contradiction to the way we chose p and ℓ. So that's the proof. The conjecture was open for some forty years, and the first proofs used heavier tools, but this is a really beautiful and elegant argument that probably most of you have seen already. Just notice a few things. First of all, it's important that the set is finite, because otherwise you could just take the entire plane. And the second remark is that the theorem also holds in higher dimensions: you can just project the points to a generic plane. Right.
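The hypothesis of the theorem is easy to check mechanically for a small finite point set. Here is a minimal Python sketch (the function names are my own, and exact integer or rational coordinates are assumed so the collinearity test is exact):

```python
from itertools import combinations

def collinear(p, q, r):
    # 2D cross product of (q - p) and (r - p); zero iff p, q, r are collinear
    return (q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0]) == 0

def is_sylvester_gallai(points):
    """Check the hypothesis of the theorem: every line through two
    points of the finite set contains a third point of the set."""
    for p, q in combinations(points, 2):
        if not any(collinear(p, q, r) for r in points if r != p and r != q):
            return False  # found an "ordinary" line through exactly two points
    return True
```

A collinear set passes the check; the four corners of a square fail it, exactly as in the example with the ordinary line.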
And we also used the reals, right — the proof uses distances, which is a property of the reals. Indeed, the theorem is not true over finite fields, and the proof does not work over the complex numbers. It was asked by Serre in 1966, and then proved by Kelly in 1986, that if you take a finite set of points in complex space with the same property — any complex line through two points of the set meets a third point — then the dimension of the affine span of the points is at most two. It's not only one; it can be two, so the points can fill a C², a complex plane. Another important version, related to what I'm going to speak about later, is the colorful version of the theorem, due to Edelstein and Kelly. For us it will suffice to focus on the following statement. Assume our set P is composed of three different color classes, three different sets: a red set, a green set, and a blue set. The property is that every non-monochromatic line must contain all three colors. So, for example, if you pick a red point and a blue point and take the line that connects them, then this line also meets the green set in some point. Then the conclusion is that the dimension of the span is again bounded by a constant, and over the complex numbers you need one more dimension for this to hold. And another version of this theorem that we're going to need, or use, is the following robust version. Up to now we assumed that every line through any two points satisfies the property. About a decade ago, Barak, Dvir, Wigderson, and Yehudayoff — with a later improvement by Dvir, Saraf, and Wigderson — proved the following robust version: if for every point in the set, the special lines through it — special lines being the lines that contain at least three points of the set — cover some constant fraction δ of the set, like 10% of the set, then the dimension is bounded. So it's no longer that every line through two points meets a third point, but that many lines do.
Then the conclusion is that the dimension of the set is at most O(1/δ), a constant over delta. And this is tight, because if you have n points, you can partition them into 1/δ collinear groups of δn points each, lying on lines in different directions, and it's not hard to show that this satisfies the property. So up to a constant factor this result is tight, which is very nice. And actually, as it turns out, using the same tools they used to prove this result, you can re-prove Kelly's theorem over the complex numbers. Okay, so these are three different versions of the Sylvester–Gallai theorem, some extensions of it. Now I want to connect it to problems on polynomials, so let me take an algebraic view of this theorem. It can also be viewed as a dual statement about linear forms. In this setting, we have a finite set of homogeneous linear forms over the reals. We have n variables and m homogeneous linear forms, and they satisfy the following property: whenever we take a common zero of two of the linear forms, it is also a zero of some third form. So any n-tuple that makes two of the linear forms vanish also makes a third form equal to zero. In this case, you can show that the dimension of the span of the linear forms is at most two, or over the complex numbers you lose one more dimension. Okay, so now it's not about points and lines, but about linear forms and their zero sets — about hyperplanes. And this duality gives you the one-line proof, basically. If you consider a linear form ℓ of the form ⟨v, x⟩, an inner product of a coefficient vector v with the variable vector x, then you can associate with ℓ the span of v, a one-dimensional subspace — a projective point. Now pick a hyperplane in general position, and for each linear form ℓ, associate the point of intersection of the span of v with this hyperplane.
It's not difficult to prove that if a linear form is in the span of two other linear forms, then the corresponding three points must be collinear. And the condition about the vanishing of course implies that for any two of the linear forms, some third form is in their linear span. Okay, so this is the degree-one analog of the Sylvester–Gallai theorem, and I'm going to speak about higher-degree analogs during this talk. Okay. So let me now show you how this problem connects to problems in theoretical computer science and how I got interested in it — that is, how higher-degree analogs are relevant to polynomial identity testing in general. I don't expect you to know this topic; I'll define everything and explain it. Okay, so a big theme in computer science is the question of program testing, and in this talk we'll only consider algebraic programs. What is an algebraic program? Just imagine that you have a set of inputs, and you're allowed to use additions, multiplications, and scalars from some underlying field F. Now somebody asks you to compute some algebraic expression — for example, consider the polynomial x² − y². You give this problem to your students, and they come up with some implementation; they give you some other program that they claim computes what you wanted them to compute. Now the question is, how do you verify it? How do you make sure that the two expressions are equal? Right, and this is a very general problem. Think about chips designed by Intel that are supposed to multiply large numbers — how do you verify that they work correctly? So this is a very important source of the problem. But it's not just a problem about program testing, right? In mathematics you have many algebraic identities. For example, the determinant of the Vandermonde matrix, right — it equals this product of differences.
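The span condition in the dual statement is itself just a rank computation. A small sketch in Python (my own helper names; exact rational arithmetic via `fractions` so there are no floating-point issues):

```python
from fractions import Fraction

def rank(vectors):
    """Rank of a list of coefficient vectors, by Gaussian elimination
    over the rationals (exact arithmetic)."""
    rows = [[Fraction(x) for x in v] for v in vectors]
    r, col, n = 0, 0, len(rows[0]) if rows else 0
    while r < len(rows) and col < n:
        pivot = next((i for i in range(r, len(rows)) if rows[i][col] != 0), None)
        if pivot is None:
            col += 1
            continue
        rows[r], rows[pivot] = rows[pivot], rows[r]
        for i in range(len(rows)):
            if i != r and rows[i][col] != 0:
                f = rows[i][col] / rows[r][col]
                rows[i] = [a - f * b for a, b in zip(rows[i], rows[r])]
        r += 1
        col += 1
    return r

def in_span(w, v1, v2):
    """Is the linear form w in the linear span of v1 and v2?
    (Equivalently: w vanishes on every common zero of v1 and v2.)"""
    return rank([v1, v2, w]) == rank([v1, v2])
```

For homogeneous linear forms, "w vanishes on the common zeros of v1 and v2" is exactly this span membership, which is what makes the point–line duality work.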
And how often can we verify such identities efficiently? Well, in the case of this determinant, of course, we have a proof, but there are many general identities, many conjectured identities, that we don't know how to prove. And the problem is that if we try to expand both sides, then they have exponentially many monomials. So we don't have an efficient way of checking that everything cancels out, or that the two expressions are equal, right? It takes too much time. Even if you have only 50 variables, you can have 2⁵⁰ monomials — a huge number. So the general problem that interests me is the following. We are given an algebraic computation — by that I mean it has to be an efficient computation, something like the product of (x_i − x_j), something that you can present in a short and succinct way — and we have to determine whether it computes the zero polynomial. Or, put differently, you can have two expressions F and G, both given by efficient computations, and you have to check whether they compute the same polynomial. If you take the difference, then it's the question about being zero, okay? So this is the general question of polynomial identity testing, or checking algebraic identities. And of course, I think it's a very natural question; it has many applications in computer science, but let me first define it properly. The model of computation that we are talking about is called an algebraic circuit. You can think about it as a directed acyclic graph. We have inputs — vertices of in-degree zero — labeled by either variables or scalars from the field. Then you have the internal gates, labeled with the arithmetic operations, multiplication and addition. Each gate computes a polynomial in the natural way, right?
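To make the Vandermonde example concrete: rather than expanding symbolically, one can compare the two sides at sample points, which already foreshadows the evaluation-based approach to identity testing. A small sketch with exact arithmetic (helper names are mine):

```python
from fractions import Fraction
from itertools import combinations

def det(m):
    # cofactor expansion along the first row; fine for small matrices
    if len(m) == 1:
        return m[0][0]
    total = 0
    for j, entry in enumerate(m[0]):
        minor = [row[:j] + row[j + 1:] for row in m[1:]]
        total += (-1) ** j * entry * det(minor)
    return total

def vandermonde_check(xs):
    """Compare det of the Vandermonde matrix [x_i^k] with the product
    prod_{i<j} (x_j - x_i), at a concrete tuple of values."""
    n = len(xs)
    v = [[Fraction(x) ** k for k in range(n)] for x in xs]
    prod = Fraction(1)
    for i, j in combinations(range(n), 2):
        prod *= Fraction(xs[j]) - Fraction(xs[i])
    return det(v) == prod
```

Of course, agreement at a point is weaker than a proof of the identity — which is exactly the gap between randomized and deterministic identity testing discussed next.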
For example, this gate computes x₁ times x₂, the plus gate here computes x₂ plus 1, et cetera. So each gate computes a polynomial. The challenge is that, given such a circuit — think about it as having many, many variables, like 500 variables, or n variables with n large, asymptotically going to infinity, and the circuit size also polynomial in n, like n² — it computes a complicated expression with many monomials, and you have to decide whether it computes the zero polynomial. Okay, so that's the basic polynomial identity testing question. You can see that it immediately relates to checking algebraic identities, just succinctly represented. Now, there's a very simple randomized algorithm. In the computer science literature it's usually attributed to DeMillo–Lipton, Zippel, and Schwartz, but of course the underlying fact was known many years before that. You simply evaluate your circuit at a random point, right? Because the very simple fact is that if the polynomial is not zero, then its evaluation at a random point is nonzero with high probability — the zero set of a nonzero polynomial has measure zero. Okay, but what we're asking for is not a randomized algorithm; we want a proof, right? We want to know the identity holds. We want a deterministic algorithm — not an algorithm that outputs the correct answer with high probability, but one that always outputs the correct answer. And I think this is a very natural problem on its own, but it's also motivated by other problems in computer science and mathematics. So, for example, the celebrated primality-testing algorithm of Agrawal, Kayal, and Saxena works by verifying a certain polynomial identity deterministically. Another example from the computer science perspective — I won't go into it — is the problem of deciding whether a graph has a perfect matching.
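The randomized algorithm just described is one line of logic: evaluate at a random point and trust the Schwartz–Zippel bound. A hedged sketch, treating the circuit as a black-box function (the interface and names are my own simplification):

```python
import random

def randomized_pit(poly, n_vars, degree, trials=20, sample_size=None):
    """Schwartz-Zippel style test: evaluate the black-box polynomial
    at random points.  If poly is nonzero of total degree d, a random
    point drawn from a set S of values hits a zero with probability
    at most d/|S|, so repeated trials make the error negligible."""
    size = sample_size or 100 * degree  # keep the per-trial error d/|S| small
    for _ in range(trials):
        point = [random.randrange(size) for _ in range(n_vars)]
        if poly(*point) != 0:
            return "nonzero"
    return "probably zero"
```

For example, `randomized_pit(lambda x, y: (x + y) * (x - y) - x * x + y * y, 2, 2)` reports "probably zero", since that expression is identically zero — but, as the talk stresses, this is exactly the kind of answer a deterministic algorithm should replace with certainty.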
So we have efficient algorithms for that problem, but the best parallel algorithm — I mean an algorithm that can make many steps at the same time, where you count the number of sequential steps — is randomized, and if you could solve the polynomial identity testing problem efficiently, then you would get a deterministic parallel algorithm. But perhaps the most important application is that the PIT problem is intimately connected to the algebraic version of the P versus NP problem. It's not exactly what's written here, but basically, informally, solving PIT is almost equivalent to deciding whether P equals NP in the algebraic world. Okay, I'm not going to define P or NP in the algebraic world, but these are well-defined classes and that's a very important problem — if you want, I can explain it later. This connection was shown by Kabanets and Impagliazzo in 2003. So I hope that by now the problem statement is clear: we are given an algebraic computation, it computes a polynomial, and we want to decide deterministically and efficiently whether it is the zero polynomial. And I hope I also convinced you that this is a very natural and interesting problem on its own, but also important for other reasons. So I think it's a good time for questions. Okay, so let's continue. Let me consider a very special case of polynomial identity testing. I'm going to look at a very restricted class of computations called ΣΠΣ (sigma-pi-sigma) circuits. These are depth-three circuits: the bottom sums compute linear functions, then you have products of linear functions, then sums of products of linear functions. So let's consider the following polynomial.
So this is a product, i running from 1 to d, of certain linear polynomials, then another product, and another product — so we have three multiplication gates; that's the 3 here. And ω here is a root of unity. Does this sum equal zero? It's perhaps not so easy to see, but this is a good example: if you do the following change of variables and play with it a little bit, then it simplifies to the following simple expression, which is easily seen to be zero. But if you just compare these two expressions, the original question looks somewhat mysterious, while this is a very simple expression that arises from a very simple change of variables. Okay, but of course this is an example that I prepared at home. The general question is the following: I give you such an expression, a high-degree expression built from linear functions in n variables, and you have to decide whether it computes the zero polynomial. Now, if you try to expand the expression, with n variables you're roughly going to get something like n^d monomials, and if d is large this is an exponential number and you're not going to be able to handle it efficiently. So is there a better way of checking whether such identities are zero? That's the question, and this is where the Sylvester–Gallai theorem first came into play in the identity-testing setting. Okay, so let me show this connection. Let's look at the following question: we have A, a product of linear functions, B, a product of linear functions, and C, a product of linear functions, and we have to decide whether A + B + C equals zero, right. It's exactly an expression like we had before, but now in general form. Notice that this is the first nontrivial case, because if you had only A, or if you had to decide whether A + B = 0, then by unique factorization this is pretty easy.
You just have to check that there's a matching between the linear functions in A and B under which they agree up to constants, and that the constants work out — and basically that's it. So A + B + C = 0 is the first nontrivial case, and it's already much less trivial. In joint work with Dvir we showed that if the sets are disjoint — no linear function appears in two of the sets, and notice that you can make this assumption without loss of generality, because if the sum is zero and some linear function appears in both A and B, then it must also appear in C, so you can divide by this linear function — so if the sets are disjoint and the sum is zero, then the dimension of the span of all the linear functions in all three sets is bounded by some absolute constant. This absolute constant is independent of the number of variables and of the degree of the circuit. Okay. So that's an interesting statement: whenever you have three products of linear functions summing to zero, of arbitrary degree in arbitrarily many variables, the situation is not so different from the example I showed — essentially everything lives in constantly many variables, and things cancel out in some magic way. Okay, so that's the take-home message of this theorem. And how do you prove it? Well, it's very simple once you make the connection to the Sylvester–Gallai theorem. Let me first say that you get an algorithm from it: if the dimension is constant, then you can just make a change of variables to get an expression in a few variables, and then expanding is not so costly — you can do it in polynomial time. Okay, so how do you prove this statement? Pretty straightforwardly. Assume you have such an equation A + B + C = 0. Take any linear function from A and any linear function from B and set them to zero. Then certainly A vanishes, because you set a factor of A to zero, and B vanishes because you set a factor of B to zero. So if the sum is zero, then C must also vanish.
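The easy A + B = 0 case via unique factorization can be made concrete: normalize each linear form up to a scalar, match the normalized factors as multisets, and check that the scalar parts cancel. A minimal sketch (my own representation: a linear form is a tuple of coefficients, assumed nonzero):

```python
from fractions import Fraction
from collections import Counter

def normalize(form):
    """Scale a nonzero linear form (tuple of coefficients) so its first
    nonzero coefficient is 1; return the normalized form and the scalar."""
    lead = next(c for c in form if c != 0)
    return tuple(Fraction(c) / lead for c in form), Fraction(lead)

def sums_to_zero_two_products(A, B):
    """Decide whether prod(A) + prod(B) == 0 for two lists of linear
    forms, using unique factorization: the normalized factors must match
    as multisets, and the leading scalars must multiply to cA = -cB."""
    if len(A) != len(B):
        return False
    normA, normB = [normalize(f) for f in A], [normalize(f) for f in B]
    if Counter(f for f, _ in normA) != Counter(f for f, _ in normB):
        return False
    cA = cB = Fraction(1)
    for _, c in normA:
        cA *= c
    for _, c in normB:
        cB *= c
    return cA == -cB
```

For instance, x·y plus (−x)·y sums to zero, while x·y plus 2x·y does not — which is why the first genuinely hard case is three summands.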
This means that whenever a_i from A and b_j from B vanish, so does some linear function from C. And if you think about it for a second — since the variety where a_i and b_j both vanish, the subspace they cut out, is irreducible — there is at least one fixed linear factor of C that always vanishes whenever a_i and b_j do, which means it lies in their linear span. And this is exactly the situation of the colorful version of the Sylvester–Gallai theorem. Right: we have three sets, three colors, A, B, and C, and whenever we take two linear forms from two different color sets and set them to zero, some form from the third set becomes zero. And now you can apply the Edelstein–Kelly theorem — the colorful version of the Sylvester–Gallai theorem — to conclude that the dimension is constant. Okay, so I hope this connection is clear and simple. And this approach actually extends when you have more than three summands — any constant number of summands. Why is it interesting? So, okay, I wanted to speak about the general polynomial identity testing question, and I spoke only about a very restricted model, sums of products of linear functions. But as it turns out, this model is quite strong, and in particular, if you could solve polynomial identity testing for such ΣΠΣ circuits of exponential degree — very, very high degree — then this would imply an algorithm for the general case. Okay, I'm not going to speak about this reduction; it's a very interesting result showing that to solve the polynomial identity testing problem, you actually just have to solve it for very restricted classes of computations. And this is one reason this ΣΠΣ model has been very actively studied. Okay, so this was the ΣΠΣ model, and for it we only needed the linear-algebraic, degree-one analog — the Sylvester–Gallai theorem itself.
So let's look at the more general model of depth four, which is now sums of products of sums of products — ΣΠΣΠ circuits. Here the bottom layer computes polynomials, but if you want the computation to be efficient, there are polynomially many gates in the circuit, so these are polynomials in n variables with only polynomially many monomials, and we take products of such polynomials and then sums of those. So, let us look at the following specific question. Again we have A, B, and C, but now each A_i, B_i, and C_i is a degree-d polynomial, and you have to decide whether the sum is zero. Okay. Earlier we saw the case of linear functions, so d was equal to 1; now we're asking a more general question. And this is very interesting because, in the depth-three case, we had to solve it for exponential degree in order to get results for general circuits. Now there's a theorem showing that it's enough to solve PIT for such depth-four circuits to get PIT for general circuits — we don't have to go to exponential degree; polynomial degree and size at depth four is already enough. Okay, so that's a very important model; it's been heavily studied ever since this result. And it was conjectured by Beecken, Mittmann, and Saxena, and by Gupta — okay, they made slightly different sorts of conjectures, and I'll speak more about the conjectures later — that if you have such an expression summing to zero, then if you look at the algebraic rank, not the linear span but the transcendence degree, of all the polynomials in the three sets, then it is again some absolute constant that depends only on d. Okay, so notice: not on the degrees of A, B, and C, nor on the number of multiplicands in each gate — it's an absolute constant depending only on d. And what's the intuition? Of course, the intuition goes back to the degree-one case that we saw earlier.
I just showed you that if you set two polynomials to zero — so A_i vanishes and B_j vanishes — then C must vanish, and for every common zero of A_i and B_j there is some C_k that also vanishes. So the intuition says that we just need some degree-d analog of the Edelstein–Kelly theorem — I mean, of the colorful version of the Sylvester–Gallai theorem. But you should know that this case is different from the linear case, because the ideal generated by A_i and B_j does not have to be prime. For example, consider the following simple example: the polynomial a, which is xy + zw, and b, which is xy − zw. Now consider c, which is going to be a product of four linear polynomials. It's not hard to see that whenever a and b vanish, then xy vanishes and zw vanishes, and therefore one of x, y vanishes and one of z, w vanishes. But it doesn't always have to be the same c_i: for different common zeros of a and b, different factors of c vanish. It's not always going to be the same one. So it's a different problem — you cannot chase a single c_i; it's not always the same one. Okay, so this makes the problem harder to solve, and this is just one aspect of the new difficulty. Okay. So, any questions so far? Okay. So let me speak about our results and related results. Okay, so the first set of results concerns the setting of quadratic polynomials, and it says the following: if a set of quadratic polynomials satisfies that for every two polynomials there is a third polynomial that vanishes whenever the first two vanish, then the dimension of the span of the set is again an absolute constant that does not depend on the number of polynomials or the number of variables. Okay. And notice: the conjecture spoke about algebraic rank, but here you actually get bounded linear rank. Okay. So the statement is also true for quadratics. And remember, we also wanted colorful versions of this theorem.
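The xy ± zw example can be checked numerically: sample common zeros of a and b and watch different linear factors of c vanish at different points. A small sketch (the sampling scheme is my own; it uses that a = b = 0 forces xy = 0 and zw = 0):

```python
import random

def common_zeros_sample(k=50, seed=1):
    """Sample points where a = x*y + z*w and b = x*y - z*w both vanish:
    a = b = 0 forces x*y = 0 and z*w = 0, so zero out one of {x, y}
    and one of {z, w} and pick the remaining coordinates at random."""
    rng = random.Random(seed)
    pts = []
    for _ in range(k):
        x, y, z, w = (rng.randint(1, 9) for _ in range(4))
        if rng.random() < 0.5:
            x = 0
        else:
            y = 0
        if rng.random() < 0.5:
            z = 0
        else:
            w = 0
        pts.append((x, y, z, w))
    return pts

pts = common_zeros_sample()
a = lambda x, y, z, w: x * y + z * w
b = lambda x, y, z, w: x * y - z * w
# the product c = x*y*z*w vanishes on every common zero of a and b ...
assert all(a(*p) == 0 and b(*p) == 0 and p[0] * p[1] * p[2] * p[3] == 0 for p in pts)
# ... but no single linear factor of c vanishes on all of them
assert all(any(p[i] != 0 for p in pts) for i in range(4))
```

This is exactly the point of the example: the product vanishes on the whole variety, but you cannot pin down one fixed factor.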
So there's also a colorful version: if there is a partition of the set into three color classes, and whenever two polynomials from two different colors vanish, some polynomial from the third color vanishes, then again the dimension is constant. And recently, two independent works — the first with my student Shir Peleg, and another independent work by Garg, Oliveira, and Sengupta — proved the robust version. And what's the robust version? Basically the following. There is a δ such that for every polynomial a_i in the set, for a δ fraction of the other polynomials a_j in the set, there is a polynomial a_k that vanishes whenever a_i and a_j vanish. So it's no longer for every a_i and a_j, but for every a_i and a δ fraction of the others. And you again get that the dimension is bounded, polynomial in 1/δ. Okay, and now another version. These two versions required that for every a_i and a_j there is a fixed third polynomial; in a different paper we proved that if you have the property that whenever a_i and a_j vanish, then the product of all the other polynomials vanishes — so it's not always the same polynomial, just like the example I showed earlier — then also in this case the dimension is bounded by some absolute constant, independent of the number of variables or the number of polynomials. So this is basically what you want to achieve for quadratic polynomials, except that we don't have the colorful version in this last case, which is what we'd need to get polynomial identity testing. Consider the following setting: now we have three sets of quadratic polynomials, A, B, and C, and in a different work we proved that if the sum is zero, and the sets are disjoint, then the dimension is again constant.
Okay, and this implies a polynomial-time PIT algorithm for sums of products of degree-two polynomials — so two layers of sum and product, with quadratic polynomials at the bottom, and three summands. Okay, so that was the first polynomial-time PIT algorithm for a model that contains polynomials of degree higher than one. Okay, so these are basically the results that I've been involved in in this line of research. There are a couple of new results, still not published, that generalize some of the results here — but they're not mine, so let me not speak about them. Okay, so let me give maybe a few words about how we prove these results. The first of the ingredients is a structure theorem that studies the ideals generated by two quadratic polynomials. It states the following — I had a version in my paper, and then Shir extended it to a more general setting. We have two quadratic polynomials q₁, q₂, and a set of other polynomials, with the property that every common zero of q₁, q₂ is a zero of one of the p_i — so whenever q₁ and q₂ vanish, the product of the p_i also vanishes. Right, this is like what happens in the examples we saw. Okay. Then the theorem says that one of the following things must happen. Either there is some p_i in the linear span of q₁ and q₂ — certainly if there is such a p_i, whenever q₁ and q₂ vanish, p_i vanishes as well, and therefore the product vanishes. Okay, so that's one case. Another possible case is that there exist two linear functions, call them ℓ and ℓ′, such that their product is in the span of q₁ and q₂. Okay, I'll show an example of this case.
And the third possible case is that there are two linear functions such that q₁ and q₂ are zero modulo them — in other words, q₁ and q₂ belong to the ideal generated by ℓ and ℓ′. Okay, so the theorem says that whenever you have this situation — whenever q₁ and q₂ vanish, some product vanishes — then one of these cases must happen. If you think about it, condition one is like the generic case: if q₁ and q₂ don't satisfy any special relation between them, then basically case one tells you that the only way such a thing can happen is if some p_i is in their linear span. The other cases describe the possible structured relations between q₁ and q₂ when you are very far from the generic case. Okay, so let me give you examples showing that these three cases are different. Certainly the first case does not need any explanation. Let's look at an example for the second case. Pick a quadratic q and two linear functions ℓ and ℓ′, and let q₁ be q and q₂ be q₁ + ℓ·ℓ′. Now pick two other linear functions ℓ₁ and ℓ₂, and define p₁ to be q₁ + ℓ·ℓ₁ and p₂ to be q₁ + ℓ′·ℓ₂. Now observe: whenever q₁ and q₂ vanish, then also the product ℓ·ℓ′ vanishes — it's q₂ minus q₁ — so either ℓ vanishes or ℓ′ vanishes. And then it's not hard to see that this immediately implies that either p₁ vanishes or p₂ vanishes, or in other words that the product p₁·p₂ always vanishes. And again it's not hard to see that neither p₁ nor p₂ is in the linear span of q₁ and q₂. Okay, so this case is different from the first case, and it's a genuine possibility. Another example is the following — here we want two linear functions such that q₁ and q₂ are in their ideal. So let x and y be those linear functions, let q₁ be x·A + y·B and q₂ be x·C + y·D, for linear functions A, B, C, D. Now, if q₁ and q₂ vanish, then think about it: it's like taking the matrix (A, B; C, D) times the vector (x, y), right?
And q₁ is the first coordinate and q₂ is the second coordinate. So if this multiplication results in the zero vector, then either the determinant AD − BC is zero at that point, or, if the determinant is nonzero, then both x and y must be zero — and in either case a suitable product of these polynomials vanishes. So that's another example. And again it's not hard to see that this example is not captured by the first case nor by the second case. So it's a third possibility that can happen. And what the theorem says is that basically these are the only possibilities for such a structure — for some product of polynomials to be in the radical of the ideal generated by two other quadratics. So that's the first tool. Shir and I gave a quite elementary proof of this theorem, and there are more sophisticated proofs, using tools from algebraic geometry, that actually give you stronger results, classifying when the ideal generated by q₁ and q₂ is prime, radical, et cetera. One way to see it: if you look at the resultant of q₁ and q₂ with respect to some variable, and you look at the factorization of this resultant, then basically the different cases correspond to the different ways in which the resultant factorizes — whether it's irreducible, whether it has irreducible quadratic factors, whether it factors into linear forms, more or less. Okay, so that's the idea of how we prove this theorem. The second tool that we needed for the proof — for the colorful version, and also for the robust result — was an analog of the robust version of the Edelstein–Kelly theorem, the colorful version of the Sylvester–Gallai theorem; we needed a robust combinatorial version that was not captured by earlier results. Recall the colorful version: if we have colored points and every non-monochromatic line contains all three colors, then the dimension is constant.
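The determinant argument in the third example rests on two polynomial identities in the spirit of Cramer's rule: with q₁ = xA + yB and q₂ = xC + yD, one has D·q₁ − B·q₂ = x·(AD − BC) and A·q₂ − C·q₁ = y·(AD − BC), so on any common zero of q₁, q₂ either the determinant vanishes or x = y = 0. A quick numerical sanity check of these identities (treating A, B, C, D and x, y as values at a random point; function name is mine):

```python
import random

def check_cramer_identity(trials=200, seed=7):
    """Verify  D*q1 - B*q2 == x*(A*D - B*C)  and
              A*q2 - C*q1 == y*(A*D - B*C)
    at random integer points, where q1 = x*A + y*B and q2 = x*C + y*D."""
    rng = random.Random(seed)
    for _ in range(trials):
        x, y, A, B, C, D = (rng.randint(-50, 50) for _ in range(6))
        q1, q2 = x * A + y * B, x * C + y * D
        det = A * D - B * C
        if D * q1 - B * q2 != x * det or A * q2 - C * q1 != y * det:
            return False
    return True
```

Since both identities hold as polynomial identities, the check passes at every point — which is the "either det = 0 or x = y = 0" dichotomy used in the talk.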
Then you need the following fully robust version: if every point, in some sense, is contained in a delta fraction of the special lines through it, then the dimension is small. We were able to prove a bound of roughly one over delta cubed. We didn't try to improve it; this is probably not tight, and probably the correct bound is one over delta, but for our needs it was sufficient. Actually, that's an interesting open point, to improve it to one over delta. I don't know how to do it, but I don't think it's a very difficult problem. Okay, so that's the second tool that we needed. And about the way that we combine these tools, I'm just going to say very few words, because it's a technical theorem. Basically you prove the following. We use the structure theorem to argue that — well, think about any of these settings, right — okay, so we have a set of quadratic polynomials that satisfies these vanishing conditions. And what you show is that if you consider the coefficient vectors of the polynomials, thinking about them as vectors in, roughly, R to the n squared, then either these vectors satisfy the robust Sylvester–Gallai or Edelstein–Kelly condition, or, if they don't satisfy it, then basically you can prove that each quadratic polynomial actually depends on a few linear functions — as a quadratic form, its rank is constant. And now, if you collect all these linear functions, or take a basis for each polynomial, so that you get a collection of linear functions, then we prove that these linear functions must satisfy the conditions of the robust Sylvester–Gallai or Edelstein–Kelly theorem. In either case, all the polynomials either live in a constant-dimensional space, or they depend on constantly many linear functions, which again implies the result.
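The claim that a quadratic of constant rank "depends on a few linear functions" can be made concrete: write the quadratic form as a symmetric matrix; its rank bounds how many linear forms are needed to express it. A small sketch (the specific polynomial is my own arbitrary illustration, not one from the talk):

```python
import sympy as sp

xs = sp.symbols('x0:6')

# A quadratic of low rank in 6 variables: a product of two linear forms.
Q = sp.expand((xs[0] + xs[1] + xs[2]) * (xs[3] - xs[4]))

# Q = v^T M v for the symmetric matrix M = Hessian(Q)/2, with v = (x0,...,x5).
M = sp.hessian(Q, xs) / 2

# Rank 2: Q can be rewritten using only 2 linear forms, even though it
# formally involves 5 of the 6 variables.
print(M.rank())   # 2
```

This is the dichotomy used in the argument: either the coefficient vectors themselves are in "Sylvester–Gallai position," or each polynomial collapses to a low-rank form like this one, and then the analysis moves to the underlying linear forms.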
And the way you show this is quite technical; we have to do a lot of case analysis. The intuition is basically that, by the structure theorem, if a polynomial Q belongs to many special lines — right, if with many Qi and Qj there is a Qk — then Q must have structure: either Qi and Qj span a third polynomial, right, this was the first case in the structure theorem, or Qi and Qj are very closely related — either their span contains a product of two linear functions, or they belong to an ideal generated by two linear functions. Right, so in either case we get that Q is very close to some other polynomials, or it depends on very few linear functions. And again, as I said, it's a lot of case analysis, because we have to study the three different cases and break the set into which polynomials satisfy which case with which other polynomials. That's all I'm going to say about the proof. Okay, I don't think it's very informative, but that's, at a very high level, what's going on in the proof. So let me end this talk by speaking about some open problems, and this is a good opportunity to speak about the conjectures of Ankit Gupta. So, as I mentioned, there were two conjectures, the first by Beecken, Mittmann and Saxena and the other by Gupta, that spoke about depth-4 identities. Gupta actually made conjectures that just generalize the Sylvester–Gallai setting to polynomials of higher degrees, and this was the motivation for our work in the first place. So the first conjecture that Ankit made was the following. Suppose there is a set of degree-d polynomials that satisfies the property we already discussed: whenever two of them vanish at a point, some third one also vanishes there. Then the conjecture was that the transcendence degree must be some constant that depends only on the degree d. This conjecture has been solved; the result is still not published, but it's a very interesting result.
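Stated symbolically — and this is my own reconstruction from the description above, so treat the exact formulation as an assumption rather than a quote from the slides — the first conjecture reads:

```latex
\textbf{Conjecture (Gupta, as described above).}
Let $\mathcal{F} = \{P_1,\dots,P_m\}$ be irreducible polynomials of degree at
most $d$ such that for every $i \neq j$ there exists $k \notin \{i,j\}$ with
\[
  P_k \in \sqrt{(P_i, P_j)} ,
\]
i.e., $P_k$ vanishes on every common zero of $P_i$ and $P_j$. Then
\[
  \operatorname{trdeg}(P_1,\dots,P_m) \le c(d)
\]
for a constant $c(d)$ depending only on $d$, and not on $m$ or on the number
of variables.
```

For d = 1 this recovers the classical Sylvester–Gallai statement, since for linear forms "a third polynomial vanishes on the common zeros" is exactly the three-points-on-a-line condition, and bounded transcendence degree is bounded linear dimension.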
That conjecture speaks about the case where we have one set. Another conjecture is the following: you have many sets of polynomials. Imagine now that we have k sets, F1, F2, up to Fk, k sets of degree-d polynomials, with the property that whenever you pick k minus one polynomials from k minus one different sets, then any common zero of them is also a zero of some polynomial in the remaining set. Okay, so whenever you have polynomials from the first k minus one sets and take a common zero of them, then it's also a zero of some polynomial in the k-th set. And the conjecture is that in this case too the transcendence degree is bounded by some constant, depending only on the degree of the polynomials and the number of sets, but not on the number of variables, and not on the sizes of the sets. Okay. And of course, if you solve this conjecture, then you get deterministic polynomial-time polynomial identity testing algorithms for a class of circuits for which, right now, we don't have any polynomial-time algorithm. But regardless of whether you care about algorithms for polynomial identity testing or not, I think that just as an algebraic question it's very interesting. So these conjectures are higher-degree analogs of the Sylvester–Gallai theorem, and this last one relates to the colorful version. But let me speak about another conjecture, which you can think of as more related to the combinatorial point of view, and it is the following. There is a theorem that if you have a finite set of points in R squared that is not contained in a conic, then there's a conic that contains exactly five points of the set. Okay. So compare this with the Sylvester–Gallai theorem: if you have a finite set of points that is not contained in a line, then there's a line that contains exactly two points from the set. You need five points to determine a conic, and you need two points to determine a line.
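The counts here come from counting coefficients: a plane curve of degree d has binom(d+2, 2) monomials, minus one for overall scaling, giving d(d+3)/2 degrees of freedom — 2 for a line, 5 for a conic. A quick sketch of both the count and an actual conic fit, with arbitrarily chosen sample points (my own, for illustration):

```python
from math import comb

import sympy as sp

# Degrees of freedom of a plane curve of degree d:
# binom(d+2, 2) monomial coefficients, minus 1 for scaling = d*(d+3)/2.
for d in (1, 2, 3):
    print(d, comb(d + 2, 2) - 1)   # 1 -> 2, 2 -> 5, 3 -> 9

# Fit a conic a*x^2 + b*x*y + c*y^2 + d*x + e*y + f = 0 through five points
# by solving the 5x6 homogeneous linear system in its coefficients.
pts = [(0, 0), (1, 0), (0, 1), (2, 3), (-1, 2)]
M = sp.Matrix([[px**2, px*py, py**2, px, py, 1] for px, py in pts])
coeffs = M.nullspace()[0]          # 1-dimensional solution space here
x, y = sp.symbols('x y')
conic = sum(c*m for c, m in zip(coeffs, [x**2, x*y, y**2, x, y, 1]))
print(all(conic.subs({x: px, y: py}) == 0 for px, py in pts))  # True
```

For five points in general position the nullspace is one-dimensional, so the conic through them is unique up to scaling, which is exactly the sense in which "five points determine a conic."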
So this is a different generalization of the Sylvester–Gallai theorem. And here's an open problem. Let's say that we have a finite set of points in R squared that is not contained in a curve of degree d — there is no degree-d curve that interpolates all the points, that passes through all the points. Then the conjecture is that there's a curve of degree d that contains exactly the number of points needed to specify such a curve, which is d(d+3)/2. And again, it's a trivial observation that this is the number of points that you need in order to specify a degree-d curve. Okay, so this is not an algebraic conjecture, it's not related to polynomial identity testing, but again, I think it's a very elegant, intriguing open problem that's related to everything we spoke about today. Okay, so to conclude: we saw applications — not to locally correctable codes, which we didn't get to, but to algebraic identity testing — and some generalizations to algebraic geometry questions. And as I said, there are many open problems; a lot is not known. I think it's a beautiful set of questions that combines combinatorics, algebra, algebraic geometry, and computer science, and I think it's just a beautiful topic. Okay, so I think I'm just in time. So thank you. I'll take questions if you have any.