Then it's my pleasure to introduce the second speaker of the session, Jorge Villar.

Thanks for the introduction. I'm going to talk about the families of matrix problems that were introduced at Crypto 2013, but here we aim to enlarge the family of problems that can be modeled within this algebraic framework. To introduce the family, I'll give a short overview. This is by now almost standard notation for this kind of problem: we use additive, implicit notation because it is very convenient for algebraic operations, so we write group elements with brackets. This bracket notation is also very convenient for matrices and vectors, and we use the same notation for pairings. The subindex T means that this element, for instance, is the generator of the target group raised to the power x + y.

To start introducing the problems, we actually thought about generalizing the classical Decisional Diffie-Hellman problem and the Linear problem. They can be included in a very wide family of problems that we call subspace membership problems. We start with a family of vector subspaces of fixed dimension k inside a vector space of dimension ℓ. From this family we pick one subspace at random, and the problem is telling apart a random vector in the column span of that subspace from a random vector in the whole space; it is a subspace membership problem. Typically this is implemented with a matrix: each subspace in the family is defined as the column span of a full-rank matrix. You can then, for instance, consider the DDH problem or the 2-Lin problem specified in that way, so you can rewrite these problems as matrix problems, defining a matrix that depends on some parameters. For instance, DDH is defined by a one-column matrix whose entries are 1 and a, so the only parameter is a, while the 2-Lin problem uses two parameters, a1 and a2. You can generalize this to other problems. The columns of the matrix are the generators of the subspace, and the vector z is either a random linear combination of the columns or a random vector in the whole space.

Let us define what we call a matrix distribution. A matrix distribution is a collection of matrices depending on some parameters, or rather a probability distribution over matrices. All the matrices have the same size, ℓ × k, with k < ℓ, and we consider that the matrices are sampled using a polynomial map f of constant degree, meaning the degree does not depend on the security parameter. In this work we restrict to the case ℓ = k + 1, so the matrices are almost square, because otherwise, if ℓ is greater than k + 1, we have fewer algebraic tools to evaluate the hardness of these problems. We also focus on linear polynomials, because the applications need this kind of restriction: given a matrix A we can then extract the parameters a1, ..., ad, which is very convenient. Otherwise, again, the tools for working with these problems at higher degrees are very limited. These are the examples I mentioned before, the Diffie-Hellman and 2-Lin problems. The general definition of the decisional MDDH problem is this one.
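To make the definition concrete, here is a minimal sketch (not from the talk; the toy prime, generator, and function names are my own) of sampling a DDH-style MDDH instance in implicit notation: the adversary only sees the group elements [A] and [z] and has to decide whether z lies in the column span of A.

```python
import random

# Toy group: p = 2q + 1 is a safe prime and g = 4 generates the subgroup of order q.
q = 1019
p = 2 * q + 1          # 2039, prime
g = 4

def bracket(x):
    """Implicit 'bracket' notation: [x] = g^x mod p."""
    return pow(g, x % q, p)

def sample_ddh_mddh_instance(real=True):
    """DDH seen as an MDDH instance: A = (1, a)^T, and z = A*w (real) or z uniform."""
    a = random.randrange(1, q)
    w = random.randrange(1, q)
    if real:
        z = [w, (a * w) % q]                             # z lies in the column span of A
    else:
        z = [random.randrange(q), random.randrange(q)]   # z uniform in (Z_q)^2
    # The adversary is only given the group elements [A] and [z].
    return [bracket(1), bracket(a)], [bracket(z[0]), bracket(z[1])]

A_enc, z_enc = sample_ddh_mddh_instance(real=True)
print(A_enc, z_enc)
```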
So we start with a matrix distribution A(t), where t is a vector of, say, d parameters. The problem is telling apart a random linear combination of the columns of A(t) from a random vector. This is the work from Crypto 2013: the hardness of this problem in the generic group model depends on the properties of the determinant polynomial, which is the determinant of the matrix extended with a new column of variables. This polynomial is a polynomial in the parameters of the matrix distribution and in the variables corresponding to the added column.

There are some known instances of these MDDH problems: the uniform matrix distribution and the linear one, but also the cascade and the symmetric cascade were considered. Observe that the number of parameters is very different, although the sizes of the matrices are comparable; in this case all the sizes are identical, but we can range from a fully randomized matrix with many parameters, k(k + 1) of them, to a matrix with a single parameter. The latter has a very compact representation, because to specify the matrix you only have to store one group element. Over these three years a lot of applications have appeared, constructions of different primitives and protocols, and the main point is that you can almost always replace the linear or 2-Lin assumption by an arbitrary MDDH assumption.

Let me move to the main contribution of the talk, which is generalizing this framework to computational problems. Some security notions are related to indistinguishability problems, like semantic security, but to capture other notions, like unforgeability, soundness of arguments, or the binding property of commitments, we typically need computational problems, search problems, and typically with more than one possible solution. This is what we call flexible computational problems. So we are generalizing the previously known framework of MDDH problems to a computational version, I mean to flexible problems.

We start again with a collection of subspaces, now of dimension r; I changed the dimension here, instead of k I use r. What we do here is not a membership problem but a sampling problem: the question is whether there are subspaces from which it is difficult to sample a vector. Forging a signature or a proof means generating some element in a set, and we model this as sampling a vector from a hidden subspace. Typically we use the left kernel of a matrix A, that is, the kernel of A transpose. If the size of the matrix is ℓ × k as before, then the dimension of the kernel is the difference, ℓ − k, assuming that A is full rank. We define the kernel MDH problem in this way: we fix a matrix distribution, and the problem is, given a matrix from this distribution, finding a non-zero vector in the kernel of A transpose.

Let us look at some examples, for instance the DDH and the 2-Lin distributions. They define kernel problems, but the first one is trivial, because we can always find a vector orthogonal to the vector (1, a) just by exchanging the two components and changing the sign of one of them.
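As a quick sketch of why the DDH kernel problem is trivial (my own toy code, reusing the same kind of toy group as above; parameter names are mine):

```python
# Toy illustration: for the DDH matrix A = (1, a)^T, a kernel solution can be produced
# directly from the group elements [1], [a], without ever knowing the exponent a.
q = 1019
p = 2 * q + 1
g = 4

def bracket(x):
    return pow(g, x % q, p)

a = 123                                  # hidden; the solver only sees [1] and [a]
one_enc, a_enc = bracket(1), bracket(a)

# Swap the two components and negate one of them: x = (a, -1) satisfies a*1 + (-1)*a = 0,
# and [x] = ([a], [-1]) is computable from the instance alone.
x_enc = [a_enc, pow(one_enc, q - 1, p)]  # [-1] = [1]^(q-1)

# For the 2-Lin matrix [[a1,0],[0,a2],[1,1]] the kernel vector is (a2, a1, -a1*a2) up to
# scaling: the entry a1*a2 is a *product* of hidden exponents, i.e. a CDH computation,
# so the same trick does not work and the kernel problem is believed hard for k >= 2.
```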
But this is not easy to do for the 2-Lin kernel, because we have to find a vector that is orthogonal to these two columns simultaneously. For instance, if we try to write out this kernel explicitly, we see that with group elements we cannot complete the vector (x1, x2, x3) in a non-trivial way, because that would require computing products of hidden exponents, which means solving the computational Diffie-Hellman problem. This is why these problems are considered hard and are useful, except for the case k = 1, where the problem becomes trivial.

Are there other examples? Yes. The first result is that if the MDDH problem, the decisional problem, is hard, then the kernel Diffie-Hellman problem is also hard. This is what one expects when defining a computational problem related to a decisional problem: if the decisional problem is hard, then the corresponding computational problem is also hard. So every example of an MDDH problem gives us an example of a kernel Diffie-Hellman problem. Here is the proof, but I think it is quite straightforward: a vector in the left kernel of A lets one tell apart what we call the real distribution in the MDDH problem from the random one, because pairing the solution x of the kernel MDH problem with the challenge vector always gives the trivial element in the real distribution, while in the random distribution it equals the neutral element of the target group only with negligible probability.

So we have all these families, and the question now is whether there are problems already used in the literature that fall inside this wide family of computational problems. There are, ranging from the Find-Rep problem to the 1-Flexible Square Diffie-Hellman problem. The latest applications appearing in papers are homomorphic signatures, quasi-adaptive non-interactive zero-knowledge proofs, commitments, and structure-preserving signatures. We introduced this idea two years ago, and people were referencing the paper on ePrint, but we are presenting the results now; part of them are quite old, since they date from two years ago, but the hardness results are the ones presented in this talk.

One of the main reasons why these problems are useful is that they can be used to build a compiler from a secret-key scenario into a public-key one. For instance, if we start with a designated-verifier proof of membership of a vector x in a subspace, the column span of M, then we can hide the secret matrix K used in this membership proof inside a sort of ciphertext built from an MDDH matrix. The point is that if we embed the secret key in this kind of ciphertext, then the verification equation of the designated-verifier proof can be performed without knowledge of the secret key K; we only need knowledge of M and of this ciphertext. Of course this is not equivalent, because the designated-verifier proof is perfectly sound, while here we only get computational soundness. The key point is that this equation is equivalent to this one, and then finding a non-trivial solution of the equation that is a bad solution, one that does not satisfy the original protocol, means finding a non-trivial vector in the kernel, so it means solving the kernel Diffie-Hellman problem.
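Going back to the reduction from the decisional problem to the kernel problem a couple of paragraphs above, here is a minimal sketch of the distinguishing step. Implementing a real pairing is out of scope, so the pairing product is simulated at the exponent level, using the fact that the product of component-wise pairings equals the target-group element whose exponent is the inner product; the 2-Lin instance, the toy modulus, and the names are my own.

```python
import random

q = 1019   # toy prime group order; the pairing check is simulated at the exponent level

def kernel_distinguisher(x, z):
    """Given a kernel solution x (x^T A = 0) and an MDDH challenge [z], the product of
    pairings e([x_1],[z_1])*...*e([x_l],[z_l]) equals [<x,z>]_T, so it is the neutral
    element of the target group exactly when <x, z> = 0 mod q."""
    inner = sum(xi * zi for xi, zi in zip(x, z)) % q
    return "real" if inner == 0 else "random"

# Toy 2-Lin instance A = [[a1,0],[0,a2],[1,1]] with kernel vector x = (a2, a1, -a1*a2).
a1, a2 = random.randrange(1, q), random.randrange(1, q)
x = [a2, a1, (-a1 * a2) % q]
w1, w2 = random.randrange(q), random.randrange(q)
z_real = [(a1 * w1) % q, (a2 * w2) % q, (w1 + w2) % q]      # z = A*w
z_rand = [random.randrange(q) for _ in range(3)]

print(kernel_distinguisher(x, z_real))   # always 'real'
print(kernel_distinguisher(x, z_rand))   # 'random' except with probability about 1/q
```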
Moving on to the hardness of these problems, we can study generic hardness, and we give the following results. If the decisional problem is generically hard (we say a matrix distribution is hard when its decisional problem is hard), then we have shown that the corresponding kernel problem is also generically hard. We also have what we call algebraic reductions: starting with a matrix distribution A, we can define another matrix distribution B just by multiplying from the left and from the right by invertible matrices L and R, and this gives a natural reduction between kernel problems. So we can modify the matrix distribution just by multiplying from the left and from the right by random invertible matrices. The main result is that a sequence of matrix distributions of increasing size defines a family of kernel assumptions of strictly increasing hardness. We show that larger problems are harder than smaller problems, and that they are actually strictly harder. The first reduction results are quite straightforward, because we give explicit reductions (see the sketch after this paragraph); the difficulty is that proving the strictly increasing hardness requires black-box separations.

In this diagram you can see what happens with the decisional problems in the upper part. For the computational problems we have three results: the decisional problems imply the computational problems, the smaller problems are easier than the larger problems, and they are strictly easier. Now let's focus on the red part. The problem with black-box reductions between flexible problems is that a black-box reduction must work for all possible oracles solving the problem P2: if we want to reduce P1 to P2, the reduction has to work for every oracle solving P2. Hence a separation result means that any reduction fails for some oracle solving P2. The difficulty is that if P2 is a flexible problem, the reduction has to be able to extract useful information from any possible solution of the flexible problem. This is somewhat unusual and makes the proofs a bit harder. We also need to impose extra requirements to obtain meaningful results: we assume that the reduction only makes a constant number of queries to the P2 oracle.

The main result is that, in the generic multilinear group model, we can show that the black-box reduction has the following structure. We split the reduction into two parts: the first part corresponds to the first q − 1 queries, and we separate the last query. The point is that the last block R1, the final processing stage, can be modeled by this very simple equation, where N is a linear map that only depends on the randomness of the reduction; it cannot depend on A, the problem instance the reduction is trying to solve. Also, the image of this linear map has dimension strictly smaller than k, because we are trying to reduce from a problem of size k to a problem of size k̃ that is strictly less than k. This is crucial, and we also use a geometric property of hard matrix distributions, what we call elusiveness. That means the following: the subspaces defining the matrix problem belong to a family, and we pick one of them at random; the family is elusive if there is no k-dimensional subspace F that intersects the randomly chosen subspace non-trivially with non-negligible probability.
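Going back to the algebraic reduction mentioned above, here is a minimal sketch (toy modulus, DDH-sized matrices, and helper names are my own) of how a kernel solution for B = L·A·R is turned into one for A: from x^T(LAR) = 0 and R invertible we get (L^T x)^T A = 0.

```python
import random

q = 1019  # toy prime modulus; all matrices are nested lists over Z_q

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y))) % q
             for j in range(len(Y[0]))] for i in range(len(X))]

def transpose(X):
    return [list(row) for row in zip(*X)]

a = random.randrange(1, q)
A = [[1], [a]]                       # DDH-sized matrix, just to keep the toy small
L = [[1, 2], [3, 7]]                 # determinant 1, invertible mod q
R = [[5]]                            # non-zero scalar, invertible mod q
B = matmul(matmul(L, A), R)          # the transformed matrix distribution

# A kernel solution for B (here found by hand; in general it comes from the B-oracle):
b1, b2 = B[0][0], B[1][0]
x_B = [[b2], [(-b1) % q]]            # satisfies x_B^T * B = 0

x_A = matmul(transpose(L), x_B)      # the transformed solution for A
print(matmul(transpose(x_A), A))     # [[0]]: x_A is in the left kernel of A
```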
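To give a feeling for the elusiveness property just defined, here is a rough Monte-Carlo sketch under my own reading of it, for the 2-Lin case with ℓ = k + 1: fix any k-dimensional subspace F (a hyperplane, described by a normal vector) and check how often the one-dimensional kernel of A transpose falls inside F. The toy prime, the names, and this particular experiment are mine, not the paper's.

```python
import random

q = 1019  # toy prime; real parameters would make the probability below negligible

def kernel_vector(a1, a2):
    """Left-kernel vector (up to scaling) of the 2-Lin matrix [[a1,0],[0,a2],[1,1]]."""
    return (a2, a1, (-a1 * a2) % q)

# Fix an arbitrary 2-dimensional subspace F of (Z_q)^3, described by a normal vector n.
n = tuple(random.randrange(1, q) for _ in range(3))

trials, hits = 200_000, 0
for _ in range(trials):
    a1, a2 = random.randrange(1, q), random.randrange(1, q)
    x = kernel_vector(a1, a2)
    if sum(ni * xi for ni, xi in zip(n, x)) % q == 0:   # kernel of A^T lies inside F
        hits += 1

print(hits / trials)   # on the order of 1/q: the kernel "eludes" any fixed subspace F
```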
So with these kinds of distributions, if you fix a k-dimensional subspace F, a subspace drawn from the family has a non-trivial intersection with F only with negligible probability. We show that all hard matrix distributions have this property, and then we can prove that the last oracle call is useless. In this diagram, what we prove is that essentially the reduction does not need to compute V, because U is enough. Actually, we prove that either U solves the original problem instance or it lies in the image of this linear map W. By the k-elusiveness, this second case is not possible, and then U has to be a solution of the original problem. Once we can remove one of the queries, we can use this argument iteratively and show that the reduction would be able to solve the problem instance without any oracle query, which means the problem was easy. So if we try to find a black-box reduction between two hard problems, this kind of reduction does not exist, because if it existed, the original problem would be easy. This is the black-box separation.

Another result concerns the case ℓ greater than k + 1, I mean when the matrices in the matrix distribution are taller. There we have serious problems with the algebraic part, especially for proving hardness even in the generic group model. But we managed to introduce a new family of assumptions with t parameters, where t is the number of extra rows: we make the matrix taller, but we need these t parameters. We show that this family has optimal representation size, meaning the minimal number of parameters; we need at least t parameters, which is the difference between the number of rows and the number of columns. We can also show, by hand, that this defines generically secure MDDH problems, that is, the MDDH problem is hard in the k-linear generic group model. Let me comment a bit on this contribution, because it may seem that you can just take a matrix distribution and apply the generic group model; the problem is that we are dealing with a family of problems, not a single problem, and we needed more advanced tools to prove this security result. We even had to compute a Gröbner basis by hand, which is non-trivial. So we cannot expect other families that can easily be shown to be hard distributions. And that's all. Thank you.

Thank you very much. Are there any questions for Jorge? I think there is time for one short question.

So when the matrix is taller, you gave one example where you had t parameters and all the values were equal. What is likely to happen in the other cases, where you have more parameters? Do you have any examples where it is easy, or do you just not know?

Yes. The natural, simple example is the completely randomized matrix. The problem is that it requires a lot of public information to store the matrix. If you use this kind of matrix distribution in a cryptographic protocol, perhaps you need to store the matrix as part of the public key. So instead of using k(k + t) group elements, we can use just t elements. It is inconvenient; it is about the compactness of the representation.

I understand it is inconvenient, but is it insecure if you have all the parameters?

I mean, the completely randomized matrix is more secure than the others.
So you have to balance security and compactness, but all of them are secure in the generic group model. Okay. Thank you very much. Please join me in thanking Jorge again.