Thank you. So today I want to talk about the last fall degree, HFE, and Weil descent attacks on ECDLP. I want to focus mostly on the ECDLP part, and then I want to explain why we also studied HFE and the last fall degree. So let me first start with the definition of the elliptic curve discrete logarithm problem. Let K be a finite field, and consider an elliptic curve E over K defined by an equation of the form y^2 + a1·xy + ..., where a1 up to a6 (there is no a5) are constants in the field K. Then the set E(K) of K-rational points of this curve, that is, the solutions of the equation where x and y lie in the field K, forms a finite abelian group if you also add the point at infinity, with identity element infinity. So the group E(K) can be used in cryptography, and there's this whole field called elliptic curve cryptography. This field relies on one assumption, namely that the elliptic curve discrete logarithm problem is hard. So what is this problem? Let P be a point in the group and let Q be a point in the group generated by P. Then the question is: can you find an integer c such that Q = cP? This is the elliptic curve discrete logarithm problem (ECDLP). There exist many algorithms to solve this problem. First of all, we have algorithms which are generic: they work for any finite abelian group, such as exhaustive search, baby-step giant-step, Pollard's rho method, and many other methods. The second class of methods works only on specific types of curves, namely supersingular curves or anomalous curves. And today I want to focus on a third type of elliptic curves, namely ones defined over F_{q^n}. In this case you can use Weil descent attacks. This was first developed by Semaev, Gaudry, Diem, and many other people. So let me explain how you apply such a method.
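To make the generic-group methods just mentioned concrete, here is a minimal baby-step giant-step sketch. It works in the multiplicative group of F_p as a stand-in for an elliptic curve group (the algorithm only uses group operations, so the idea carries over unchanged); the prime 101 and generator 3 are purely illustrative choices, not from the talk.

```python
from math import isqrt

def bsgs(g, h, p):
    """Baby-step giant-step: find x with g^x = h (mod p).
    Toy version in the multiplicative group of F_p, standing in
    for an elliptic curve group."""
    n = p - 1                      # group order (g assumed a generator)
    m = isqrt(n) + 1
    # Baby steps: table of g^j for j = 0..m-1.
    table = {pow(g, j, p): j for j in range(m)}
    # Giant steps: h * (g^-m)^i; a match gives x = i*m + j.
    gm_inv = pow(g, -m, p)         # modular inverse (Python 3.8+)
    gamma = h
    for i in range(m):
        if gamma in table:
            return i * m + table[gamma]
        gamma = gamma * gm_inv % p
    return None

# 3 is a primitive root mod 101, so every h has a discrete log.
x = bsgs(3, pow(3, 57, 101), 101)
print(x)  # 57
```

The same O(sqrt(|G|)) time/memory trade-off applies verbatim to E(K), which is why generic attacks alone force only a square-root security loss.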
So remember that E over K is an elliptic curve, P is a point in the group, and Q is a multiple of P. There are three steps which are the basis of an index calculus. First you fix an integer m ≥ 2. In the second step, you define a subset of the points of the elliptic curve; this is called a factor base. Then you pick integers a and b at random and you try to write aP + bQ as a sum of m points in your factor base, and you repeat this many, many times, so you obtain a lot of relations. In the third step, you use linear algebra to eliminate the factor-base points from your relations, and then you find a relation between Q and P. As you can see, the third step is a fairly standard step; it's just linear algebra. The hardest step in this case will be the second step: how do you find these relations? In practice it's not trivial which m you should pick, which factor base you should pick, or how you find such relations.

So now let me restrict to a more specific case. Assume that we work with a field K = F_{q^n}. Then you can consider natural factor bases as follows: you let V be a subvector space of the field K (over F_q), and you consider the points on the curve whose x-coordinate lies in this subspace. So you can use a vector space as a factor base, and to optimize the complexity of your algorithms, quite often you pick the m from the previous slide such that n'·m ≈ n, where n' is the dimension of V. Now the question is how you decompose such an aP + bQ as a sum of m points in your factor base. There's an algebraic way of doing this. There's a theorem of Semaev which basically says that there exist summation polynomials which allow you to test whether a relation exists. In other words, you can write down a polynomial system in, say, m variables involving the (m+1)-th summation polynomial S_{m+1} and some extra polynomials.
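The three-step skeleton above can be sketched in code. Since an elliptic-curve factor base needs the summation-polynomial machinery discussed next, this toy shows only the relation-collection step (step two) in the classical setting F_p^*, with small primes as the factor base; all concrete numbers (p = 107, g = 2, base {2, 3, 5, 7}) are illustrative assumptions, not from the talk.

```python
import random

def trial_factor(v, base):
    """Return the exponent vector of v over the factor base,
    or None if v is not smooth (does not factor over the base)."""
    exps = [0] * len(base)
    for i, q in enumerate(base):
        while v % q == 0:
            v //= q
            exps[i] += 1
    return exps if v == 1 else None

def collect_relations(g, p, base, count):
    """Step 2 of index calculus, toy setting F_p^*: pick random
    exponents k and keep those where g^k factors over the base,
    giving relations k = sum e_i * log(base_i) (mod p-1)."""
    rels = []
    while len(rels) < count:
        k = random.randrange(1, p - 1)
        e = trial_factor(pow(g, k, p), base)
        if e is not None:
            rels.append((k, e))
    return rels

p, g, base = 107, 2, [2, 3, 5, 7]
for k, e in collect_relations(g, p, base, 5):
    # Each relation really does hold: g^k = prod base_i^{e_i} in F_p.
    rhs = 1
    for q, ei in zip(base, e):
        rhs = rhs * pow(q, ei, p) % p
    assert pow(g, k, p) == rhs
```

Step three would then do linear algebra on the exponent vectors modulo the group order; on elliptic curves the hard part is precisely making step two work at all.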
And if you can solve this system, that is, find the zeros of the system, then you can obtain relations. So using Semaev's summation polynomials, you can turn this problem into solving a polynomial system. There are many tools for solving polynomial systems, such as Gröbner basis algorithms. But if you try to put this system into a computer, you see that you have a couple of problems. The first problem is that the polynomials f_i I described here, which are there to make sure I only find solutions lying in the factor base, have a very high degree, namely the size of the vector space. The second problem is that this S_{m+1} has a very high degree, and it's a polynomial which is very hard to compute, so it's very hard to give it to a computer.

So let's first solve the first problem. One can solve it as follows: there's something called Weil descent. Instead of working with the system over F_{q^n}, you work over F_q and you introduce some more variables. I will not explain exactly how you do this, but as a consequence, the f_i polynomials become linear polynomials. So they get a very low degree, although the number of variables does increase. The second trick to help you is the splitting trick, which was invented by various people. Instead of considering the system F, which involves S_{m+1}, you split up this polynomial, introduce more variables, and then you get a system which only involves S_3. And S_3 is a polynomial of very low degree (degree two in each variable). If you combine both tricks, you obtain a system F'' which has a lot of variables but low degree; you can just write it down. And if you can solve this system, you can essentially solve the elliptic curve discrete logarithm problem. Okay, so now let us restrict a bit more.
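To make the summation polynomials concrete, here is a small sanity check of Semaev's S_3, using its standard formula for a short Weierstrass curve y² = x³ + ax + b over a small odd-characteristic prime field (the binary-field case the talk moves to later needs a different curve shape). The curve parameters p = 101, a = 2, b = 3 are arbitrary illustrative choices.

```python
import itertools

p, a, b = 101, 2, 3   # toy curve y^2 = x^3 + 2x + 3 over F_101

def add(P, Q):
    """Affine point addition; None represents the point at infinity."""
    if P is None: return Q
    if Q is None: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None                       # P + (-P) = infinity
    if P == Q:
        lam = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, p) % p
    x3 = (lam * lam - x1 - x2) % p
    return (x3, (lam * (x1 - x3) - y1) % p)

def S3(x1, x2, x3):
    """Semaev's third summation polynomial for y^2 = x^3 + a*x + b:
    it vanishes exactly when there exist curve points with these
    x-coordinates summing to the point at infinity."""
    return ((x1 - x2) ** 2 * x3 ** 2
            - 2 * ((x1 + x2) * (x1 * x2 + a) + 2 * b) * x3
            + ((x1 * x2 - a) ** 2 - 4 * b * (x1 + x2))) % p

# Brute-force the affine points, then check: whenever P + Q is affine,
# (x(P), x(Q), x(P+Q)) is a zero of S_3, because
# P + Q + (-(P+Q)) = infinity and x(-R) = x(R).
pts = [(x, y) for x in range(p) for y in range(p)
       if (y * y - x ** 3 - a * x - b) % p == 0]
for P, Q in itertools.combinations(pts, 2):
    R = add(P, Q)
    if R is not None:
        assert S3(P[0], Q[0], R[0]) == 0
print(len(pts) > 0)  # True
```

This degree-2-per-variable behavior is exactly why the splitting trick reduces everything to S_3.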
From now on we'll work over F_{2^n}, because that's what most people did. And what most people did to solve the previous system, say this F'' system, is use Gröbner basis algorithms. These algorithms are very hard to estimate the complexity of. The main parameter for the complexity is called the degree of regularity, which is the largest degree you see in a Gröbner basis computation. And because it's a very hard parameter to estimate, people came up with a heuristic called the first fall degree heuristic. So what is this heuristic? If I start with a system G consisting of polynomials g1 up to gt, then I can look at expressions of the form sum h_i·g_i, where the h_i are polynomials. If you look at such an expression, you expect the degree of this polynomial to be equal to the maximum of the degrees of the h_i·g_i. But sometimes, because of cancellation, the degree will actually be lower than expected. And the first degree at which you get an unexpectedly low degree is called the first fall degree. There's a conjecture by various people that the degree of regularity of the systems F' and F'' is very close to the first fall degree, okay? So the conjecture is: the degree of regularity, which determines the complexity of Gröbner basis algorithms, is very close to the first fall degree. The idea is that the first fall degree is something you can easily estimate; you can easily give upper bounds for it. If you combine these two things, the conjecture and the fact that you can find upper bounds easily, you obtain subexponential algorithms for ECDLP, okay? So actually, the main message of the article is that we do not believe that these subexponential results are valid. We have two reasons for this. First of all, if you look at this first fall degree conjecture: if you were to apply a similar conjecture to a very similar system, you could actually show that you can solve ECDLP in polynomial time.
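A tiny, hand-picked illustration of such a cancellation (this is not an ECDLP system, just a two-polynomial example I chose to exhibit a fall): products h_i·g_i of degree three combine to something of degree one because the top-degree parts cancel.

```python
from sympy import symbols, expand, Poly

x, y, z = symbols('x y z')

g1 = x*y + 1
g2 = y*z + 1

# Generically we expect deg(h1*g1 + h2*g2) = max(deg(h1*g1), deg(h2*g2)).
# With h1 = z and h2 = -x, both products have degree 3, but their
# degree-3 parts x*y*z cancel, leaving a degree-1 polynomial:
comb = expand(z*g1 - x*g2)
print(comb)                                # -x + z
print(Poly(comb, x, y, z).total_degree())  # 1
```

The first fall degree records the first degree (here 3, the degree of the products) at which such an unexpectedly low-degree combination appears.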
And this seems to be a bit too much to ask. You could also prove P = NP, which is also too strong a statement. The second reason is the following. We did a lot of experiments. This is basically the specific case where m is equal to two: we have a system consisting of one summation polynomial in two variables of degree four, a very low degree polynomial. You do Weil descent, and you get a system with, well, n polynomials. The people who first did these computations mathematically upper bounded the first fall degree by four in all cases. They computed the degree of regularity and saw three, three, four, four, four, four, four. From these results, they thought the first fall degree and the degree of regularity are very close, and hence they posed this conjecture. In fact, if you do a few more computations, you see the following. The first fall degree is not equal to four; it's at most four, and it's in fact always equal to two, so it's much smaller. And secondly, the degree of regularity, which was conjectured here to be four, turns out to be five in this case. But to really verify that it's five, you need a computer with, for example, 120 gigabytes of RAM to do the computation. So in this case you see that there's a large gap between the degree of regularity and the first fall degree, and hence we do not believe that such a conjecture is valid for ECDLP systems. There's another column here about random systems. If you consider a random system which is very similar, with the same number of variables, the same number of equations, and the same type of equations, you see that the degrees of regularity we obtained for this specific system are much smaller than those of random systems. So it still remains an important question what the growth of this degree of regularity is: if it's quite low, you can still obtain very good algorithms for ECDLP. Okay.
So let me also explain where this first fall degree two comes from. Surprisingly enough, if you have an elliptic curve over a field of characteristic two which is ordinary, then there exists something called a trace map: a surjective morphism to F_2 which factors through taking the x-coordinate. If you work with this map a bit, you can show that this causes the first fall degree to be equal to two. Okay. As I mentioned before, we do not know how fast the degree of regularity of such systems grows, and it's a very important open problem. If it grows really slowly, you still have good algorithms; we just don't know how good they are.

The second question is the following. I'll try to convince you that our research on HFE is something useful. In our previous table, you saw that the degree of regularity grows, but there do exist systems for which this is not the case, where the degree of regularity remains constant. So our question is: why does this happen? What is the difference between this system and the systems for which this happens? An HFE (hidden field equations) system is one such system, so we tried to study HFE. Let me explain what HFE is. We let K be a field of q^n elements, and we consider a polynomial F in F_{q^n}[X]. We let F' be the Weil descent system of F; this is a system of n polynomials in a polynomial ring in n variables. And you can perturb this system by doing some linear changes of coordinates and linear changes of polynomials, to basically hide the fact that your system is the Weil descent of one polynomial. This is a sort of trapdoor and you can use it in cryptosystems; this was invented by Patarin in 1996. The idea is that it's easy to find the zeros of F, but it's hard to find the zeros of the perturbed system G if you don't know the transformations to get back to one single polynomial.
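The trace map itself is easy to play with on a toy field. The sketch below implements Tr: F_16 → F_2 as the sum of Frobenius conjugates, with F_16 = F_2[t]/(t⁴ + t + 1) chosen for illustration; it only demonstrates the map's F_2-linearity and surjectivity, not the first-fall argument from the talk.

```python
# Elements of F_16 are 4-bit integers, bit i = coefficient of t^i.
MOD = 0b10011  # t^4 + t + 1, irreducible over F_2

def mul(a, b):
    """Carry-less shift-and-add multiplication in F_16."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0b10000:      # reduce: t^4 = t + 1
            a ^= MOD
        b >>= 1
    return r

def trace(a):
    """Tr(a) = a + a^2 + a^4 + a^8, the sum of Frobenius conjugates;
    the result is fixed by Frobenius, hence lies in F_2 = {0, 1}."""
    r, s = 0, a
    for _ in range(4):
        r ^= s               # addition in characteristic 2 is XOR
        s = mul(s, s)        # Frobenius: squaring
    return r

# Trace is F_2-linear and surjective onto {0, 1}:
vals = {trace(a) for a in range(16)}
print(vals)  # {0, 1}
assert all(trace(a ^ b) == trace(a) ^ trace(b)
           for a in range(16) for b in range(16))
```

Surjectivity is the point: half the field elements have trace zero, which is the kind of structure the first-fall argument exploits.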
Okay, so this system was proposed, but a couple of years later, people were able to crack it, or at least find algorithms which can solve it efficiently in certain cases. The result by Faugère in 2002 is that the degree of regularity of the system G, the perturbed system, does depend on q and the degree of F, but it does not depend on the parameter n, and in practice it's quite a low number. Furthermore, in this case it looks like the degree of regularity is very close to the first fall degree. So here you see a system which essentially is quite easy to solve. There were a couple of people who tried to prove this statement. There was first a proof by Ding and Hodges in 2011, where they gave a proof under the first fall degree assumption; but this is an assumption we doubt, so we do not really trust this proof. And there's a second claimed proof by Petit in 2013. We tried to give a completely different proof. Our goal is to prove a complexity bound for solving HFE, just to gain a better understanding of the situation: why is it different from the elliptic curve systems?

So let me now come to the last fall degree. This is a definition which is not that easy. We have a field K and a system F in m variables which generates an ideal I. Then basically we can describe a sort of black-box Gröbner basis algorithm. What does it do? It constructs a sequence of vector spaces V_0, V_1, V_2, et cetera, where V_i is contained in the ideal but also in the set of polynomials of degree at most i, with the following properties. First, V_i is the smallest K-vector space such that the polynomials in the starting set F with degree at most i lie in it. And the second property is that it's closed under ideal operations of degree at most i: namely, if g is a polynomial in V_i, and if h is any other polynomial with deg(h·g) at most i, then the product h·g lies in V_i. So you can see this as a black-box Gröbner basis algorithm.
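The V_i construction can be made concrete with plain linear algebra over the monomial basis. The following is a small sketch of my own, for two variables over Q rather than a finite field; the example system {x² − 1, xy − 1} and the target polynomial x − y are illustrative choices showing an ideal element that only becomes visible at degree three.

```python
from fractions import Fraction

VARS = [(1, 0), (0, 1)]                 # exponent vectors of x and y

def gl_key(m):                          # graded lex order on monomials
    return (sum(m), m)

def reduce_poly(p, rows):
    """Eliminate leading monomials of p using the triangular basis
    `rows` (pivot monomial -> monic row); empty dict iff p is in
    the span of rows."""
    p = {m: Fraction(c) for m, c in p.items() if c}
    while p:
        lm = max(p, key=gl_key)
        if lm not in rows:
            break
        c = p[lm]
        for m, a in rows[lm].items():
            v = p.get(m, Fraction(0)) - c * a
            if v:
                p[m] = v
            else:
                p.pop(m, None)
    return p

def insert_poly(p, rows):
    """Add p to the span; return True if the span grew."""
    r = reduce_poly(p, rows)
    if not r:
        return False
    lm = max(r, key=gl_key)
    lc = r[lm]
    rows[lm] = {m: c / lc for m, c in r.items()}
    return True

def build_V(F, i):
    """V_i: smallest space containing the f in F of degree <= i and
    closed under multiplication staying within degree i."""
    rows = {}
    for f in F:
        if max(sum(m) for m in f) <= i:
            insert_poly(f, rows)
    changed = True
    while changed:                      # close under x*, y* within deg i
        changed = False
        for g in list(rows.values()):
            if max(sum(m) for m in g) < i:
                for e in VARS:
                    shifted = {(m[0] + e[0], m[1] + e[1]): c
                               for m, c in g.items()}
                    if insert_poly(shifted, rows):
                        changed = True
    return rows

# F = {x^2 - 1, x*y - 1}. The ideal contains x - y, since
# y*(x^2 - 1) - x*(x*y - 1) = x - y: a degree-1 polynomial that
# only appears once degree-3 products are allowed.
f1 = {(2, 0): 1, (0, 0): -1}            # x^2 - 1
f2 = {(1, 1): 1, (0, 0): -1}            # x*y - 1
target = {(1, 0): 1, (0, 1): -1}        # x - y

in_V2 = not reduce_poly(target, build_V([f1, f2], 2))
in_V3 = not reduce_poly(target, build_V([f1, f2], 3))
print(in_V2, in_V3)  # False True
```

So x − y lies in the ideal but enters the V_i only at i = 3; the last fall degree measures exactly how far past deg(h) you must go before every ideal element h shows up.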
And now we have another definition, the last fall degree. If you look at a polynomial h in your ideal I, the first time you can possibly see this polynomial is in V_{deg(h)}, because V_i lives in the set of polynomials of degree at most i. But in practice you sometimes have to go a bit higher. The first D such that for all polynomials h in the ideal we have h ∈ V_{max(D, deg(h))} is called the last fall degree of the system. Okay, and it's denoted by d_F. So what are the properties of this last fall degree? Well, first of all, it's actually an integer. Secondly, it does not depend on any monomial order. One of the problems with Gröbner basis computations is that they depend on a monomial order; this is a choice, and it makes proving theorems much harder. Another point is that the last fall degree is bounded by the degree of regularity of the system. Also, there's an equivalent definition which looks very similar to the first fall degree definition, hence the name last fall degree. And finally, another important property is that if you know an upper bound e on the number of solutions of the system and you know the last fall degree, then you can easily solve the whole system, by computing V up to the maximum of the last fall degree and e and using univariate factoring algorithms.

Okay, so let me give you the theorem. Remember that K is a field of q^n elements, F is a polynomial over this field, and we consider the HFE system G. Then we can show the following. Assume that F has at most e different solutions in the field K. Set D equal to the maximum of, well, this number; it involves q, the number of solutions e, and basically 2q times the logarithm of the degree of the polynomial. Then one can deterministically find all solutions to the system in time polynomial in (m + D)^D.
What we do in the proof, essentially, is give an upper bound; without the e, this is an upper bound for the last fall degree of the system. So what you see in this statement is the following. The dependence on n is here, but not in the exponent, and this really gives you quite good behavior: this D doesn't depend on n. And secondly, the value we obtain here is quite close to what you would get if you do the experiments, so it's not very far from the truth. The idea of the proof is the following. We upper bound the last fall degree of the system using a sparse GCD algorithm. Basically, if you have one polynomial F and you want to solve it in a field K, the obvious thing you need to calculate is the GCD of the field equation of K and the polynomial F. And this is something you can essentially do after Weil descent, using a sparse GCD algorithm.

Okay, so what is the difference between the system coming from ECDLP and the system coming from HFE? It's actually the following. The HFE system is a zero-dimensional system: it has finitely many solutions over the algebraic closure. Whereas essentially the ECDLP system, if you forget about the subspace constraints, is not zero-dimensional: it has infinitely many solutions. And this is, according to us, the crucial difference between the two cases. Okay. So let me give the final slide. The conclusion is the following. We raised doubt about the first fall degree assumption for systems which are not zero-dimensional. Actually, in a follow-up article, we proved similar results for any zero-dimensional system. And in particular, we doubt the recent subexponential time complexity results for solving ECDLP using summation polynomials and Gröbner basis algorithms. In particular, we do believe that ECDLP over F_{2^n} where n is prime is still safe.
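The GCD-with-the-field-equation idea can be seen in miniature with SymPy. This is a toy version over F_p itself rather than over F_{q^n} after Weil descent, and the polynomial f is an arbitrary illustrative choice: the roots of f lying in F_p are exactly the roots of gcd(x^p − x, f), since x^p − x is the product of (x − c) over all c in F_p.

```python
from sympy import symbols, Poly, gcd

x = symbols('x')
p = 7

# (x - 2)(x - 3) has its roots in F_7; x^2 + 1 is irreducible mod 7
# since -1 is not a square mod 7.
f = Poly((x - 2) * (x - 3) * (x**2 + 1), x, modulus=p)
field_eq = Poly(x**p - x, x, modulus=p)

h = gcd(f, field_eq)
print(h.degree())  # 2: one for each root of f in F_7
```

In the talk's setting the field equation x^{q^n} − x is enormous but has only two terms, which is why a *sparse* GCD computation is the right tool and why it can be tracked through Weil descent.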
And the second part is that we gave a new method of solving systems, and a new method of upper bounding the complexity of solving systems, using this last fall degree, and we were able to give mathematically proven complexity bounds for solving HFE. Thank you for your attention.