All right, thank you for the introduction. So first, let me explain what attribute-based encryption, or ABE for short, is, with an example: targeted broadcast. Suppose Netflix wants to broadcast some TV show, for example Game of Thrones, season three. It puts it online, but this is not free content, so it should not be accessible to everyone; it's going to be encrypted. And we have a set of users, and each user pays for a different package. For example, Edward paid for the whole series, Carol paid for only seasons one and two, and David paid for something else. Each user should get access to exactly the encrypted data they paid for, and only that. To achieve this, we are going to give each user a secret key. The secret keys are going to be different for every user, and they will depend on the package. So basically, we can specify any access policy in the secret key; it's a generalization of public-key encryption. An important security requirement we want is security against collusion of users. Suppose you have a group of users that combine their secret keys: they should not learn anything more by combining their keys than what they individually know. So in this case, Carol and David should learn nothing new by combining their secret keys. In general, we have a predicate P, which is public. The ciphertext is indexed by some attribute x, the secret key is indexed by an attribute y, and a secret key for y decrypts a ciphertext for x if, and only if, P of x, y is true. So this is just some notation. And here are some examples of predicates P we care about. The simplest you can think of is equality: P of x, y is true if x is equal to y. This is called identity-based encryption, and we can achieve it with constant size under standard static assumptions on pairings. And we can do more. We can do inner product: the predicate takes two vectors of dimension n, and it is true if these vectors are orthogonal.
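To keep things concrete, here is a toy sketch in Python of the two predicates just mentioned (the function names and the choice of modulus are mine, just for illustration):

```python
# P(x, y) decides whether a secret key for y decrypts a ciphertext for x.

def equality_predicate(x, y):
    """Identity-based encryption: decrypt iff x == y."""
    return x == y

def inner_product_predicate(x, y, modulus=101):
    """Inner-product predicate: decrypt iff <x, y> = 0 (here over Z_p)."""
    assert len(x) == len(y)
    return sum(a * b for a, b in zip(x, y)) % modulus == 0
```
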
You can do that with linear-size ciphertexts and constant-size secret keys, or vice versa; you can actually get any trade-off, so the size of the ciphertext plus the size of the secret key is linear, using, again, standard assumptions on pairings. You can do more: you can do Boolean formulas, with linear-size ciphertexts and secret keys, and you can do all that, again, under standard assumptions on pairings. You may have noticed that I'm only giving fully secure attribute-based encryption schemes here, so the strongest notion of security; this way I can actually compare the relative efficiency of all these schemes. And what we observe, intuitively, is the following: the more expressive the predicate P is (and by expressive, I mean the number of access policies you can specify is large), the less efficient the scheme is. So for example, IBE is quite efficient, the best you can hope for, but it's not so expressive; inner product is more expressive, but less efficient; and so on. So the question we can ask is: can you be both expressive and efficient? Or basically, for a given predicate P, what is the best we can hope for in terms of the secret key and ciphertext size? This is the question we partially answer in this paper. So this is our result, informally. For every predicate P you can think of, and for every canonical ABE (I will explain later what that means), we have the following: the ciphertext size times the secret key size is at least as large as the communication complexity of P, which is also something I'll define later. But so far, you don't need to know what it means to appreciate the theorem. For example, if we apply the theorem to inner product, the communication complexity is linear in the dimension, so we get a lower bound that is linear in n. And as I said before, we have an upper bound achieving any trade-off between the two sizes.
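Written out, the informal theorem and its inner-product instantiation look like this (my notation, with CC denoting the communication complexity of the predicate):

```latex
% Informal theorem: for every predicate P and every canonical ABE for P,
\[
  |\mathsf{ct}| \cdot |\mathsf{sk}| \;\ge\; \Omega\bigl(\mathrm{CC}(P)\bigr).
\]
% For the inner-product predicate on vectors of dimension n,
% CC(P) = \Theta(n), so
\[
  |\mathsf{ct}| \cdot |\mathsf{sk}| \;\ge\; \Omega(n),
\]
% while the known constructions achieve any trade-off with
% |ct| + |sk| = O(n).
```
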
So it's not tight, but it is tight for some range of parameters, and it is the first non-trivial lower bound for ABE. So what do I mean by canonical? A canonical ABE is an ABE that you can obtain via a generic compiler, which has been done before; it's not our work. This compiler takes a statistical primitive called conditional disclosure of secrets, CDS for short, which depends on the predicate P. As I said, it's statistical, so think of something really simple, similar to secret sharing: not quite, but similar. And it compiles this into a fully secure ABE, which is something much more complicated, via a generic transformation that does not depend on the predicate P, using, for those who know, the dual system encryption methodology introduced by Waters. So why do we do that? Why do we care only about canonical ABE, and why don't we give a lower bound for any ABE? Basically, we don't know how to directly give a lower bound on ABE. Instead, what we do is give a lower bound on the communication of the CDS, because it's much simpler; this we know how to do, and this is what we show. And this implies a lower bound on ABE for the predicate P. This is nice because, by focusing only on the CDS, we abstract away the computational assumption. But of course, it means the lower bound does not apply to every ABE. So what is the scope of our result? If you are aiming for only selective security, which is weaker than full security, then you are not the output of the compiler, so our lower bound does not apply. If your scheme is not pairing-based, then, as I said, the compiler uses pairings, so you are not in the scope of our result. Also, if you use a non-static assumption, a q-type assumption, you will not be in the scope of our result, because the compiler uses static assumptions like D-Lin or composite-order assumptions. But even if you are using a static assumption like D-Lin, you can be outside the compiler.
There are some cases. So essentially, if you don't use the dual system encryption methodology, you will be out of the scope of our result. But what we observe is that all the ABE schemes that do satisfy these four points are indeed the output of the compiler. So it's a restrictive subset, but still an interesting subset, right? So now we can forget about ABE and focus on the simpler primitive behind it, which is CDS. So what is a CDS, a conditional disclosure of secrets? We have three players: Alice, Bob and Carol. Alice has a private input X, Bob has a private input Y, and they share a secret alpha. And they want to disclose the secret on the condition that some Boolean predicate P on X and Y is true. For correctness, we have: if the predicate is true, Carol, given the messages of Alice and Bob, should recover alpha. And we have a privacy requirement, which is basically that if the predicate is false, MA and MB should be statistically independent of alpha. So perfect security... perfect privacy, sorry. All right, something I forgot to mention: Carol knows X and Y in advance; this is for free. We only care about the sizes of MA and MB. And if you think about it, you will realize that Alice and Bob have to share secret randomness, W. Okay, just some notation: we call C Carol's function, which takes X, Y, MA, MB and outputs alpha. What I want you to notice is that W and alpha are private: they're only known to Alice and Bob, which is in stark contrast with ABE, which is a public-key primitive. Here we have a secret-key primitive, so it's much simpler. And what we are going to prove is a lower bound on the sizes of the messages MA and MB, which are vectors over some field. Okay, so here is an example for the predicate equality: P of X, Y is true if X is equal to Y. We use a pairwise-independent hash function: MA is going to be the hash of X, and MB is going to be alpha plus the hash of Y.
So when X is equal to Y, you just subtract the messages and you recover alpha. And when X is different from Y, by pairwise independence you get alpha plus something completely random, so it's a one-time pad; we have perfect privacy. All right, so this is a simple example. Now, as I said, there is a compiler that takes a CDS and maps it to an ABE. And this is interesting because it maps MA to the ciphertext and MB to the secret key, and in doing so it preserves the sizes: the ciphertext will be a constant factor times the size of MA, and the secret key a constant factor times the size of MB. So proving a lower bound on the CDS messages gives a lower bound on the ABE sizes. That's what we do. All right, so now let's actually do the proof, or a sketch of the proof. We have a CDS, and we have the following intuition: the pair MA and MB should determine the value of the predicate P of X, Y. Because either, case one, the predicate is true and MA and MB should reveal alpha, or, case two, P of X, Y is false and MA and MB should completely hide alpha, statistically. So it seems like you cannot do both: you cannot reveal and hide alpha at the same time. We are either in case one or case two, and MA and MB should determine which. So this is the intuition we had. And following this intuition, we thought about reducing the CDS to something similar, called communication complexity, which I mentioned at the beginning, and which is as follows. It's essentially like a CDS, but now there is no secret, and Carol, who does not know X and Y, should recover the value of the predicate P of X, Y from MA and MB alone. So this exactly corresponds to the intuition. So we said, okay, we will try to do a reduction from CDS to communication complexity. Another reason we want to do this reduction is that communication complexity has been studied for a long time and is really well understood.
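As a sketch, the equality CDS just described can be written in a few lines; this is a toy version over a prime field, with the pairwise-independent hash h(z) = a*z + b, and the variable names are mine:

```python
import secrets

P = 2**61 - 1  # a prime; messages live in the field Z_P

def cds_equality(x, y, alpha):
    """One run of the equality CDS. Alice holds x, Bob holds y, and they
    share randomness W = (a, b) defining the pairwise-independent hash
    h(z) = a*z + b mod P. Returns Alice's and Bob's messages."""
    a = secrets.randbelow(P)
    b = secrets.randbelow(P)
    m_a = (a * x + b) % P          # Alice's message: h(x)
    m_b = (alpha + a * y + b) % P  # Bob's message: alpha + h(y)
    return m_a, m_b

def carol(x, y, m_a, m_b):
    """Carol knows x and y in advance. When x == y, m_b - m_a is exactly
    alpha; when x != y, it is alpha + a*(y - x), a one-time pad."""
    return (m_b - m_a) % P
```

When x equals y, Carol always recovers alpha; when x differs from y, pairwise independence makes a*(y - x) uniform, so the output is statistically independent of alpha.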
There are many lower bounds and many techniques to prove lower bounds. For example, for the inner product of vectors of dimension n, we have a linear lower bound, which is the best we can hope for. So this is why we want to reduce CDS to communication complexity. The first naive thing you can think of: as I said, intuitively MA and MB completely determine the value of P of X, Y, so why not just send MA and MB from the CDS, and that's it, that's a communication complexity protocol? Actually this doesn't work, for the following reason. As I said, for inner product we have a communication complexity lower bound which says the messages MA and MB must be of linear size. But for CDS, we can actually do better than that bound: we can have sizes one and n, or n and one, as I showed you before. So this shows the reduction cannot be as simple as that; you have to do more than just sending MA and MB. MA and MB alone do not determine P of X, Y, that's what I want to say. It turns out one way to solve this problem is to send many independent copies of MA and MB. So we start with the CDS, with messages MA, MB, randomness W and secret alpha, and we repeat the process N times, for a big N, independently, so these are picked independently. And we send all of this to Carol: lots of messages MA, lots of messages MB, and alpha. Now let's analyze this protocol. First case: P of X, Y is true. Then by correctness, if you give Carol the inputs X, Y and any pair MA, MB together with alpha, the check succeeds every time, because we have perfect correctness. Simple. Now let's see the privacy. If P of X, Y is not true, then alpha is completely independent of MA and MB. So whatever inputs X prime, Y prime you give to C, the check is going to pass with probability one half, because, okay, I forgot to say it, but here we consider alpha to be a single bit. Since alpha is a bit independent of the messages, we basically succeed with probability one half.
And these are independent events, so basically we pass all the checks with probability two to the minus N. So now, what Carol wants to check is: is there any X prime, Y prime for which the checks pass? We have to enumerate all the possible X prime, Y prime, because we don't know X and Y; we have to enumerate, that's the only thing we can do, apparently. Then we check, for each tuple, whether all N checks pass: if they do for some pair on which the predicate is true, we say the predicate is true; and if not, it's very likely that the predicate is not true, for N large enough. We just do a union bound over all X prime, Y prime, essentially. But now, how large should N be? N should grow with the number of pairs X prime, Y prime. So if there are many pairs X prime, Y prime, N has to be very large. This is bad, because N is going to be large and depend on the size of the domain of X and Y, and it's actually going to give a trivial lower bound. So this is not what we do. Now there is a trick. So far it's just a notational trick: we rewrite Carol's function on four inputs as a function on only two inputs, by hardcoding X and Y into the function. So for all X and Y, we define a function C indexed by X, Y, which takes only MA and MB. So far, okay, we only changed the notation. But now, when we enumerate over all X prime, Y prime, what we are actually doing is enumerating over possible Carol functions on MA and MB. So what we do instead is enumerate over all the functions C star on bit strings of the lengths of MA and MB, and we check the same condition. This is naive, but this trick allows us to do a union bound over a space that can be big, but that only depends on the sizes of MA and MB. So if MA and MB are really short, we actually get a non-trivial lower bound. So this already gives a non-trivial lower bound. But there is even more.
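To make the amplification concrete, here is a toy simulation using an equality CDS over GF(2) as the underlying CDS. For equality, Carol's reconstruction function happens not to depend on X and Y, so the enumeration over candidate functions collapses to a single check; the names and this particular instantiation are mine:

```python
import random

def cds_eq_bit(x, y, alpha):
    """One copy of the equality CDS over GF(2): shared randomness (a, b)
    defines the pairwise-independent hash h(z) = (a AND z) XOR b."""
    a, b = random.getrandbits(1), random.getrandbits(1)
    m_a = (a & x) ^ b            # Alice: h(x)
    m_b = alpha ^ (a & y) ^ b    # Bob:   alpha XOR h(y)
    return m_a, m_b

def amplified_carol(copies, alpha):
    """Carol does NOT know x, y. For equality the reconstruction is the
    same for every input, C(m_a, m_b) = m_a XOR m_b, so she just checks
    whether it recovers alpha on all N copies."""
    return all((m_a ^ m_b) == alpha for m_a, m_b in copies)

def predicate_estimate(x, y, n=40):
    """The communication-complexity protocol: N fresh, independent copies
    of the CDS messages, plus the (uniform one-bit) secret alpha."""
    alpha = random.getrandbits(1)
    copies = [cds_eq_bit(x, y, alpha) for _ in range(n)]
    return amplified_carol(copies, alpha)

# If x == y, Carol always answers True (perfect correctness).
# If x != y, each copy fools her with probability 1/2, so she answers
# False except with probability 2**-n.
```
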
So one thing I didn't say, which is important, is that we usually only care about reconstruction functions Carol that are linear in their input. Why? There are two reasons. The first reason is that all the upper bounds we know have a linear reconstruction. And the second reason is that the compiler I showed you before needs Carol to be linear. So we really care about linear functions. And if we care only about linear functions, then we have to union bound over a much smaller, exponentially smaller, set, so we get a much stronger lower bound. So that's it; here is our communication complexity protocol. The size of MA hat is N times the size of the original MA, and the size of MB hat is N times the size of the original MB, plus one for alpha. And as I said before, this N will depend on the class of reconstruction functions Carol: you can fix that class to be whatever you want, say circuits of small depth, and you will get a corresponding lower bound. So we have a family of lower bounds, one for each class of Carol functions. And for the linear case that we care about, we get the lower bound that is exactly what I stated in the informal theorem. And if we plug in the lower bounds from communication complexity, we get, for example, the linear bound for inner product. So that concludes my talk. To sum up, we proved a lower bound on CDS, which implies, by previous work, a lower bound for a restricted subset of attribute-based encryption schemes. And actually, this implication is interesting, but the lower bound on CDS is interesting in itself, because it's a way to quantify the communication overhead of privacy: it's a way to answer the fundamental question of how much it costs to have private communication. So it's also interesting in its own right. Now, let me mention just two open problems.
So as I said, we have lower bounds for any class of reconstruction functions Carol, but if we don't assume Carol to be linear, the lower bounds are exponentially smaller while the upper bounds are the same. So there is a big gap here, which would be interesting to bridge. And the other problem is CDS for multi-bit secrets. I only talked about single-bit secrets, but we can actually think of alpha being a bit string. A naive way to handle this is to run a CDS for each bit of the secret, in parallel. It's not really efficient, but it's the best we know. So it's an open problem to do better. Thank you for your attention.