Hello, I'm Miguel Ambrona from NTT Laboratories. I'm a postdoctoral researcher there under the supervision of Masayuki Abe, and today I'm going to talk about my work on generic negation of pair encoding schemes. Pair encodings are a primitive that is used to build attribute-based encryption, so I'm going to start by defining what attribute-based encryption is. In attribute-based encryption, there is a master authority. As an example, think of a university where there are students, professors and many other users. This authority can provide secret keys to different parties. For example, in this case, the university will give a secret key to a student who is a PhD student in mathematics, so the key will be associated to those values, which are called attributes. The second student, let's say, is a master's student in mathematics. Now consider a party who has a file, some document, and who wants to share this document with some parties, but not with everybody. She is in fact thinking of a policy: she wants to share this document with professors, or with PhD students in mathematics. She can leverage this attribute-based encryption system to produce a ciphertext out of this file, and the ciphertext can be published, for example, on the university server. Everybody can see it, but only those who have a secret key which satisfies the policy will be able to decrypt. In this case, the first student will be able to decrypt, but the second student cannot. In attribute-based encryption, we also want to prevent collusion. For example, say that the second student has a friend who is a PhD student in chemistry. Even though these two keys individually carry attributes which, combined, could satisfy this policy, it should be impossible to combine both keys and decrypt. More formally, an attribute-based encryption scheme for a predicate P consists of four algorithms. The setup algorithm produces a pair of keys.
This algorithm is supposed to be run by the university in our previous example. The key generation algorithm requires the master secret key, which is known only by the university, and on input an attribute y it produces a secret key for y. Encryption can be run by everybody who knows the master public key: on input an attribute x it produces a ciphertext for x, together with a symmetric key, so the scheme can be used as a key encapsulation mechanism. Then there is the decryption algorithm, which on input a secret key and a ciphertext will produce the symmetric key, supposed to be the same one produced during encryption, if the predicate between x and y holds, or ⊥ otherwise. Attribute-based encryption was first conceived by Sahai and Waters in 2005, and it was later formally introduced by Goyal and others. Originally it was designed in the flavor of key-policy attribute-based encryption, but there are other versions of it, for example ciphertext-policy, where policies are attached to ciphertexts and keys are associated to attributes; that's the example that we saw with the university. Nowadays the notion of attribute-based encryption has been generalized, and thanks to a big effort by the community there exist efficient schemes for a rich variety of predicates. For example, we have zero inner-product encryption, where x and y are both vectors and the predicate holds if the inner product between these vectors is zero. Other examples are monotone access structures, hierarchical identity-based encryption, large-universe attribute-based encryption, where the number of possible attributes is exponential, polynomial-size circuits, regular languages, and many others. However, despite this great progress in the field, designing better schemes in terms of size, performance, security and expressivity became really hard, until two breakthrough constructions appeared in 2014.
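The zero inner-product predicate mentioned above is easy to state concretely. Here is a minimal sketch in Python (the function name is mine, for illustration only):

```python
def zipe_predicate(x, y):
    """Zero inner-product encryption predicate: P(x, y) = 1 iff <x, y> = 0.

    Here x is the ciphertext attribute vector and y the key attribute
    vector. This only illustrates the predicate, not the encryption scheme.
    """
    assert len(x) == len(y)
    return sum(xi * yi for xi, yi in zip(x, y)) == 0

# A key for y = (1, -1, 2) decrypts a ciphertext for x = (2, 2, 0),
# since 1*2 + (-1)*2 + 2*0 = 0.
```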
These are the works by Wee and by Attrapadung, two independent works that propose generic and unifying frameworks for designing attribute-based encryption schemes for different predicates. Both works define what is called an encoding and then follow the dual system methodology to construct a compiler that, on input the encoding for a certain predicate P, produces a fully secure attribute-based encryption scheme for that predicate. These frameworks remarkably simplify the design and study of attribute-based encryption schemes, because the designer can focus on the construction of the much simpler encoding for the desired predicate and then just use the compiler to get the attribute-based encryption scheme; analyzing the security of these encodings is much simpler. In his framework, Wee defines what are called predicate encodings, while Attrapadung's framework defines pair encodings, and the latter is the primitive that we study in this work. Both works were generalized to the prime-order setting, and there are other subsequent works about pair encodings and predicate encodings that refine these notions. We are going to focus on the framework by Attrapadung, based on pair encodings, which is supposed to be the most expressive one. Now let me define what a pair encoding actually is. A pair encoding with respect to a certain predicate consists of four algorithms. The first algorithm just takes certain parameters defined by the predicate and produces an integer; the integer corresponds to the number of common variables, and we'll see what these are in a second. The next two algorithms, the key encoding and the ciphertext encoding, produce polynomials. This k represents a vector of polynomials in several variables, and what I want to point out is that in this set of variables r̂ there is a distinguished variable called α.
The ciphertext encoding also outputs a list of polynomials, in different variables, but as you can see this b appears in both; that's why these are called the common variables. And here there is a distinguished variable, s₀. Finally, there is a fourth algorithm that outputs two matrices, and there are several conditions on the pair encoding scheme. One of them is the structural constraints, which basically say that the polynomials k produced by the key encoding can only contain these types of monomials: either a common variable multiplied by one of these r variables, called the non-lone variables, or one of the so-called lone variables alone. Similarly, the ciphertext encoding can only be formed of these types of monomials. Then there is an extra condition, which is that if the predicate between x and y holds, these matrices satisfy the following equality symbolically, that is, as polynomials. Finally, there is a further condition, called non-reconstructability, which is the one that provides security: it says that when the predicate does not hold between x and y, this equality should not hold for any possible value of the matrices E and E′. Now let me briefly explain how this encoding is then used to build an attribute-based encryption scheme. This is a simplification of the actual compiler, but I think it gives a good intuition of how it works. We work over a bilinear group, and the master public key is just the generator of the target group raised to a secret element α, together with the generator of the first group raised to the common variables; α and the common variables form the master secret key. Then secret keys have this form, so the polynomials k are used in the exponent of g₂, and ciphertexts have a very similar form. Notice that the encapsulated key is just g_T raised to the product of the two distinguished variables, α·s₀.
Now, this property over here, reconstructability, guarantees that if your x and y satisfy the predicate, then you will be able to combine these terms in order to get the encapsulated key in the target group. The way you do it: you pair this element with these elements here, and vice versa here, and then apply linear algebra in the exponent. Let's see an example. This is probably the simplest pair encoding scheme, for the identity-based encryption predicate, where the predicate between x and y holds if x equals y. In this case n equals 3, so the first algorithm outputs 3. These are the key encoding and ciphertext encoding: the key encoding has two polynomials, the ciphertext encoding has only one, in these variables. In this case, the encoding satisfies reconstructability. To see why, you can simply check that if you multiply s₀ by this polynomial and s₁ by this polynomial and subtract this other polynomial, what you get is this, and notice that when x equals y these two terms will cancel out, and you will end up with s₀ times α. Arguing non-reconstructability is more involved, but there is a very nice way of doing it for this example, which is finding an assignment of the variables that vanishes all the polynomials. You can check that this assignment vanishes all the polynomials, but it does not vanish the polynomial s₀ times α, because actually both s₀ and α are equal to one. If such an assignment exists, it is clear that you cannot have reconstructability; otherwise you could reason in the following way: evaluating the reconstruction equation on this assignment, you would derive that s₀ times α must be equal to zero, while it equals one, which is obviously a contradiction. And notice that this assignment uses the inverse of x minus y, so it is only well defined if the predicate is false. But that's fine, because that's all we want. This is just an example, but there exist pair encodings for a rich variety of predicates.
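Both checks can be carried out symbolically. The sketch below uses sympy on a plausible IBE pair encoding with n = 3; the exact polynomials are my own reconstruction and need not match the ones on the slide, but they exhibit the same structure and the same 1/(x − y) trick:

```python
import sympy as sp

# Illustrative IBE pair encoding (my own plausible reconstruction).
# Common variables b1, b2, b3 (n = 3); distinguished variables alpha, s0.
alpha, r, s0, s1, x, y, b1, b2, b3 = sp.symbols('alpha r s0 s1 x y b1 b2 b3')

k1 = alpha + r * (b1 + y * b2)     # key encoding, polynomial 1
k2 = r * b3                        # key encoding, polynomial 2
c1 = s0 * (b1 + x * b2) + s1 * b3  # ciphertext encoding

# Reconstructability: a linear combination that isolates s0 * alpha.
combo = sp.expand(s0 * k1 + s1 * k2 - r * c1)
# combo == s0*alpha + r*s0*b2*(y - x), so when x == y we get s0*alpha.
assert sp.simplify(combo.subs(x, y) - s0 * alpha) == 0

# Non-reconstructability (x != y): an assignment that vanishes all the
# polynomials while keeping s0 * alpha == 1.  It uses 1/(x - y), so it
# is only well defined when the predicate is false.
assignment = {alpha: 1, s0: 1, s1: 0, b3: 0, b2: 1,
              b1: -x, r: 1 / (x - y)}
assert sp.simplify(k1.subs(assignment)) == 0
assert sp.simplify(k2.subs(assignment)) == 0
assert sp.simplify(c1.subs(assignment)) == 0
```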
However, it's natural to ask whether we can transform or combine pair encodings, and in fact this has been done. Starting from a pair encoding scheme for a certain predicate P, one may want to transform it into a pair encoding scheme for the dual predicate, which is defined by simply swapping the roles of x and y. This has been done by Attrapadung and Yamada in 2015, and they also did the conjunction: starting from two pair encoding schemes for two different predicates, build one for the conjunction of these two predicates. This transformation is very easy: you just run both schemes in parallel. However, how about the negation? Negation is harder, and it was not done before. It had been done in the more limited framework of predicate encodings, by myself during my PhD, but the predicate encoding setting, as I said, is less expressive than pair encodings, and the problem remained open for the case of general pair encodings. That is what we try to solve in this work. We propose a generic transformation that takes any pair encoding scheme and produces a new pair encoding scheme for the negated predicate. So, as I said, we solve the open problem of generic negation of pair encodings. Along the way, we provide an algebraic characterization of pair encodings that can be of independent interest; I'm going to talk a little bit about this later. Our transformation also leads to new encodings: for example, we propose the first pair encoding for negated doubly spatial encryption, and I will explain later what that is. Finally, we are going to discuss other implications of our results and why I think this transformation is important. In order to define our generic negation transformation, we are going to modify the way we look at pair encoding schemes.
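At the level of predicates (not of the encodings themselves, which is where the actual work happens), the three transformations are simple to state. A minimal sketch, with names of my own choosing:

```python
# Predicate-level view of the transformations.  The real transformations
# act on the encodings; this only illustrates the predicates they target.
def dual(p):
    """Dual predicate: swap the roles of x and y."""
    return lambda x, y: p(y, x)

def conjunction(p1, p2):
    """Conjunction: attributes are pairs, one component per sub-predicate,
    mirroring the 'run both schemes in parallel' construction."""
    return lambda x, y: p1(x[0], y[0]) and p2(x[1], y[1])

def negation(p):
    """Negated predicate: the hard case for the encodings themselves."""
    return lambda x, y: not p(x, y)

# Example with the IBE (equality) predicate:
ibe = lambda x, y: x == y
nibe = negation(ibe)
assert ibe(3, 3) and not nibe(3, 3)
assert nibe(3, 5) and not ibe(3, 5)
```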
So I'm going to give a new definition of encoding, which I like to think of as splitting an encoding into layers: there will be as many layers as there are common variables, plus one. Instead of producing polynomials, the way I see the encoding, which is what I call the algebraic encoding, is as two algorithms that produce matrices. There is a correspondence between these matrices and the previous polynomials, which is as follows. By the way, I am assuming that the key encoding is of a certain specific form, namely that α only appears in one of the polynomials, and that polynomial is of this form. This assumption is without loss of generality and has been used many times in the literature, for example by Attrapadung. One way of seeing why it is without loss of generality is to apply the dual transformation twice: if you apply Attrapadung and Yamada's dual transformation, which is an involution, you will get an encoding that is of this form. Our polynomials can be seen as the multiplication of these matrices by the corresponding common variable, all multiplied by the non-lone variables, and then we add the part corresponding to the lone variables. The same structure applies here. This structure is possible thanks to the structural constraints on the polynomials, namely that they only contain these kinds of monomials, respectively. It is very useful to look at pair encodings as matrices instead of polynomials, and to look at the reconstructability property in terms of matrices: in this case, we say an algebraic encoding is reconstructable if for every x and y that satisfy the predicate there exist matrices E and E′ such that all these equations hold. Here I'm being very explicit with the dimensions, but that is not very important now. This means the zero matrix, and this is just a matrix that is zero everywhere except the entry in the first row and first column, which is a one.
It can be shown that this set of equations is equivalent to the previous reconstructability. Why is this useful? Because we can now leverage a very powerful result from linear algebra that has been widely used in the literature, which says, roughly, that a certain system is unsatisfiable, that is, there is no v such that Av = z, if and only if there exists a w that vanishes A without vanishing z, i.e. wᵀA = 0 but wᵀz ≠ 0. This is very powerful, and it is what we are going to use in order to build our negated encoding. You can think of the first statement as non-reconstructability: a certain system has no solution, which is what you have when the predicate is false. Then there is a dual world where things are actually reconstructable. So the intuition is that if we can somehow transpose all the matrices involved in our encoding, we can create an encoding for the negated predicate. In this work we use a modified version of the previous lemma that is closer to what we need, which is as follows. I'm being very explicit with the dimensions of the matrices, but just ignore that. What this lemma says is that, given these matrices, the two conditions are equivalent: there does not exist a solution (X, Y) to this system of equations if and only if there exists a solution to this other system. Notice that the first statement here is very close to our reconstructability condition: there is a correspondence between the solution X and the matrix E, and between the solution Y and the matrix E′, and you can check that there is actually a correspondence between everything. I mean, this lemma has not been designed by chance; it is exactly what we need. The question is then whether we can use the matrices that define our encoding in a transposed manner, so that we can leverage this second statement when the predicate is false, which will give us reconstructability of the negated encoding.
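The basic lemma of the alternative can be illustrated numerically. In the sketch below (the refined variant used in the paper has more structure; this shows only the basic statement), the least-squares residual serves as the witness w, since it is orthogonal to the column space of A but not to z:

```python
import numpy as np

# Lemma of the alternative, illustrated numerically:
# "A v = z has no solution"  iff  "there is w with w^T A = 0 and w^T z != 0".
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
z = np.array([1.0, 1.0, 3.0])   # not in the column space of A

# Least squares gives the best v; the residual is nonzero, so Av = z
# has no exact solution.
v, *_ = np.linalg.lstsq(A, z, rcond=None)
residual = z - A @ v
assert np.linalg.norm(residual) > 1e-9

# The residual itself is a witness w: orthogonal to col(A), but not to z.
w = residual
assert np.allclose(w @ A, 0)
assert abs(w @ z) > 1e-9
```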
So here is my negation transformation. The matrices corresponding to the common variables, both here and here, are very sparse: almost all the entries are zero, except for a few ones. Now that we have a generic transformation that negates any pair encoding scheme, let me talk a little bit about the consequences. First of all, we can negate any encoding. People used to design both the normal version of an encoding and its negated version by hand, but now we get the negated version for free. And maybe there is some encoding for which no negated version was known; this is actually the case for doubly spatial encryption, and I'm very grateful to Attrapadung because he pointed this out. Given a vector and a matrix, and another vector and matrix, the predicate is one if and only if the two affine spaces they define intersect. There was a pair encoding for this predicate by Attrapadung, which has this form, but we did not know how to build an encoding for the negated version of this predicate. Our negation transformation gives us, to the best of my knowledge, the first pair encoding for negated doubly spatial encryption, and it looks as follows. You can pause the video and analyze it slowly, but I just want to point out that this is a pair encoding scheme for this predicate: the predicate is one if and only if the two spaces do not intersect. Our results also tell us new information about the expressivity of pair encodings. For example, now we know that the set of predicates that can be expressed with pair encodings is closed under negation; we did not know this before. Why is this useful? Well, it suggests that building pair encodings for context-free languages may be harder than we think, or maybe impossible. We can capture regular languages, but it was not known whether context-free languages could be captured or not. Notice that context-free languages are not closed under complementation.
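The doubly spatial predicate, and its negation, can be checked with standard linear algebra. A minimal sketch with numpy (the function and the exact formulation are my own illustration; the conventions in Attrapadung's paper may differ in details): two affine spaces x₀ + span(X) and y₀ + span(Y) intersect exactly when (y₀ − x₀) lies in the column space of [X | −Y]:

```python
import numpy as np

def spaces_intersect(x0, X, y0, Y):
    """Doubly spatial encryption predicate (illustrative sketch).

    P = 1 iff the affine spaces x0 + span(X) and y0 + span(Y) intersect,
    i.e. there exist a, b with x0 + X @ a == y0 + Y @ b, i.e.
    (y0 - x0) lies in the column space of [X | -Y]."""
    M = np.hstack([X, -Y])
    augmented = np.hstack([M, (y0 - x0).reshape(-1, 1)])
    return np.linalg.matrix_rank(M) == np.linalg.matrix_rank(augmented)

# The negated predicate, produced by our transformation, is 1 iff the
# two affine spaces do NOT intersect.
x0 = np.array([0.0, 0.0]); X = np.array([[1.0], [0.0]])    # the x-axis
y0 = np.array([0.0, 1.0]); Y = np.array([[1.0], [0.0]])    # the line y = 1
assert not spaces_intersect(x0, X, y0, Y)    # parallel: negated P holds
y0b = np.array([0.0, 0.0]); Yb = np.array([[0.0], [1.0]])  # the y-axis
assert spaces_intersect(x0, X, y0b, Yb)      # they meet at the origin
```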
Therefore, if we could build pair encoding schemes for context-free languages, we would also get pair encoding schemes for a predicate class that is strictly more powerful than context-free languages; so maybe achieving context-free languages is not possible. Other consequences are potential performance improvements. Notice that our results tell us that every pair encoding can be expressed in this manner, where the common-variable matrices are of this fixed form: they do not depend on the predicate. All the part that depends on the predicate lives in this lone-variables matrix. Why is this useful? Well, it can lead to efficiency improvements. First, you can precompute this part, which is common to every predicate. Second, the part that depends on the predicate is here, and the part involving lone variables can be batched very efficiently, whereas this other part cannot be batched so efficiently. There are more details about this in the paper. In general, our new generic transformation tells us that every pair encoding can be expressed in this form: if you don't see why, you can just negate it twice and you will get it in this form, although maybe there is an easier way of achieving it. Having encodings in this form is very useful because it can lead to efficiency improvements. To conclude, I have presented my work on generic negation of pair encoding schemes. I hope it gives us a better understanding of this primitive. It also led to new encodings and can potentially lead to performance improvements, so it would be useful to actually implement these ideas and see what they can lead to. As future work, it would be good to extend our techniques to the new framework by Attrapadung and Tomida, where they perform dynamic composition of attribute-based encryption in the standard model. Notice that our techniques apply to a scenario where there is a q-type assumption, so it is not a completely standard model.
So it would be good to know whether our techniques apply to this other, very recent framework by Attrapadung and Tomida. Thank you very much for listening. That was it.