Hi, I'm going to talk about adaptive security via deletion in attribute-based encryption. This is joint work with Rishab Goyal and Brent Waters. Consider public-key encryption in a large organization, where anyone can send a secret message to another person by encrypting the message under that person's public key. A naive way to realize this is to let everyone publish their public key in a directory. However, this approach becomes very inefficient when we want to send a message to a group of people: we need to encrypt and send the message separately to each of them. Suppose this group of people shares some common features; for example, they all work in the same research group, or they are all above or below a certain age. Ideally, we could encrypt a single message to exactly the people with these features, so that they can decrypt but others cannot. Such a notion was formulated as attribute-based encryption in two papers, Sahai-Waters 2005 and Goyal-Pandey-Sahai-Waters 2006. In an attribute-based encryption scheme, every user has an attribute, which is an n-bit binary string, and we have policies, which can be seen as boolean functions on n-bit inputs. A boolean function evaluates on an attribute and outputs 0 for reject or 1 for accept, where accept means the attribute satisfies the specific policy we want to deploy. To issue keys, we run Setup to generate the public key and the master secret key. Using the master secret key, we can derive secret keys associated with different attributes. To encrypt, we use the public key and encrypt with respect to a certain policy and the message we want to put into the ciphertext. The policy can be something like "decryptors must all be graduate students", expressed as a boolean function of attributes as mentioned before. Only users holding a secret key for an attribute on which this policy evaluates to 1 can decrypt the ciphertext; everyone else cannot.
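To fix the four-algorithm syntax in mind, here is a toy, deliberately insecure sketch of the ABE interface just described. Everything in it is a placeholder of my own (the names `setup`, `keygen`, `encrypt`, `decrypt` and the dictionary shapes are illustrative): the "ciphertext" stores the message in the clear and the policy check happens in the decryption routine, whereas a real scheme enforces the policy cryptographically.

```python
# Toy, INSECURE sketch of the ABE syntax only (Setup, KeyGen, Enc, Dec).
import os
import hashlib

def setup():
    msk = os.urandom(16)                      # master secret key
    pk = hashlib.sha256(msk).hexdigest()      # placeholder public key
    return pk, msk

def keygen(msk, attribute):
    # Derive a secret key bound to an n-bit attribute string, e.g. "10".
    tag = hashlib.sha256(msk + attribute.encode()).hexdigest()
    return {"attr": attribute, "tag": tag}

def encrypt(pk, policy, message):
    # Encrypt with respect to a policy: a boolean function on attributes.
    return {"policy": policy, "payload": message}

def decrypt(sk, ct):
    # Correctness: decryption recovers the message iff policy(attr) == 1.
    return ct["payload"] if ct["policy"](sk["attr"]) == 1 else None
```

For instance, with the policy "bit 1 of the attribute means graduate student", a key for attribute "10" decrypts while a key for "01" does not.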
The formulation above is often called ciphertext-policy attribute-based encryption, since the policy, the functionality, is embedded into the ciphertext. There is also a dual formulation called key-policy attribute-based encryption, which embeds the policy into the secret key and the attribute into the ciphertext. Both formulations have constructions, and both have many applications. For both formulations, we also have two levels of semantic security, called adaptive security and selective security. Adaptive security, also called full security, is what we usually want in practice. In the security game, the challenger generates a public key and a master secret key and gives the public key to the adversary. The adversary can then make queries for secret keys corresponding to attributes of his choice, polynomially many times; this can be seen as the adversary corrupting a bunch of user keys. This query stage can happen again after the challenge phase. In the challenge phase, the adversary sends a policy and two messages of his own choice, under the restriction that none of the attributes he has queried before, or will query afterwards, satisfies this policy. The challenger encrypts one of the two messages at random with respect to this policy, the adversary gets the ciphertext, and he is not supposed to be able to guess which message was encrypted. This definition, though natural, is in fact not easy to realize, so people turned to a weaker notion called selective security. In the selective security game, the adversary has to commit to the policy it wants to attack before it even sees the public key, whereas in full security the policy is chosen in the challenge phase, after the adversary has seen the public key and made queries.
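The adaptive game above can be sketched as a small driver. This is a hedged sketch, not the paper's formalization: the `scheme` and `adversary` interfaces and their method names are illustrative assumptions. The point is the order of events (keys queried adaptively before and after the challenge) and the admissibility check that no queried attribute satisfies the challenge policy.

```python
# Sketch of the adaptive (full) security game for CP-ABE.
import secrets

def adaptive_game(scheme, adversary):
    pk, msk = scheme["setup"]()
    queried = []

    def key_oracle(attr):
        # Available both before and after the challenge phase.
        queried.append(attr)
        return scheme["keygen"](msk, attr)

    # Challenge phase: policy and messages are chosen adaptively.
    policy, m0, m1 = adversary["challenge"](pk, key_oracle)
    b = secrets.randbits(1)
    ct = scheme["encrypt"](pk, policy, m0 if b == 0 else m1)
    guess = adversary["guess"](ct, key_oracle)

    # Admissibility: every queried attribute must be rejected by the policy.
    if any(policy(a) == 1 for a in queried):
        raise ValueError("inadmissible adversary")
    return guess == b                          # True = adversary wins this run
```

In the selective variant, the only change is that `policy` would be handed to the challenger before `setup` is run.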
Here, the adversary has to send in this policy before it even sees any public parameters; every other stage is the same. Many early works first built selective security, and later works moved on to reach adaptive security. Next, let's look at the line of works that build secure ABE, considering both group-based and lattice-based assumptions, and both selective and adaptive security. As we mentioned, early works built selective security first. On the group side, these early works achieved selective security from bilinear Diffie-Hellman. Later came the well-known dual system technique introduced by Waters, and adaptive security was realized in subsequent works for policies in NC1, based on different types of decisional group assumptions. However, the reliance on decisional assumptions seems to be inherent when using this technique, whereas the early works with only selective security can actually be built from search assumptions. Moving to the lattice side, several works realized selective security from lattices for circuits. However, it was not until the last two years that we finally learned something about adaptive security from lattices. Just to mention: the Boyen 13 scheme actually had an attack last year, but we leave it here because it will be useful later when we talk about our construction. The recent breakthrough we mentioned, by Tsabary in 2019, realized adaptively secure ABE from lattices. Even though the realizable functionality is only the subset functionality, it is a step towards adaptive security, and it takes a different approach from all past approaches. We therefore ask: can we expand this approach in some way to make adaptive security hold in more general cases? In our work, we simplified and expanded the framework for realizing adaptive security following Tsabary's 2019 paper.
We showed that in this simplified framework, we can instantiate adaptively secure ABE for the subset functionality from both search bilinear assumptions and lattices, and we make the whole framework clearer and more understandable. What is our high-level approach? It simply combines two building blocks in an interesting way. The first building block is a selectively secure key-policy attribute-based encryption scheme with a property we call deletable. We only require this underlying key-policy ABE scheme to be selectively secure, but it needs to support NC1 circuits, which we already have from the previous papers we mentioned. We will talk about what the deletable property means and how to realize it from previous work. The second building block is a constrained PRF with a deletion-conforming property, which we will also discuss later. Together, these two give us adaptively secure ciphertext-policy ABE. Now let's turn to our first building block, deletable ABE. We illustrate the deletable property as follows. Consider a ciphertext, an encryption of a message m to an attribute x, and view the ciphertext as a composition of a few blocks. We can then run an algorithm called Delete on the ciphertext: it takes the ciphertext and a set of indices, and deletes the blocks of the ciphertext indexed by that set. For example, given the indices 2 and 4, we just take away blocks 2 and 4. In fact, the number of blocks equals the length of the attribute, and each block is associated with one bit of the attribute. So we can look at deletion the other way around: starting from the attribute, we can run an algorithm called Restrict that simply removes the attribute bits at the indices in the given set, replacing those positions with a special symbol like ⊥ (bot).
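The two deletion operations described above can be sketched concretely. This is a toy of my own under the talk's assumption that the ciphertext has one block per attribute bit; `toy_encrypt` is a deterministic placeholder (a real scheme is randomized, and the property is indistinguishability rather than literal equality of the two ciphertexts).

```python
# Sketch of Delete (on ciphertext blocks) and Restrict (on attribute bits).
BOT = "⊥"

def delete_ct(ct_blocks, indices):
    # Drop the ciphertext blocks at the given (1-indexed) positions.
    return {i: b for i, b in ct_blocks.items() if i not in indices}

def restrict(attribute, indices):
    # Replace the attribute bits at the given positions with ⊥.
    return "".join(BOT if i + 1 in indices else bit
                   for i, bit in enumerate(attribute))

def toy_encrypt(attribute, message):
    # One block per non-⊥ attribute bit (placeholder "encryption").
    return {i + 1: (bit, message)
            for i, bit in enumerate(attribute) if bit != BOT}
```

In this toy, encrypting and then deleting blocks 2 and 4 yields exactly the same object as restricting the attribute at positions 2 and 4 and then encrypting, which is the equality-flavored analogue of the deletion indistinguishability required of a deletable ABE.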
Then the encryption procedure can encrypt with respect to an attribute containing the special symbol ⊥ and still produce a valid ciphertext. Having seen these two different ways of doing deletion, we require that they be indistinguishable given the resulting ciphertext. That is, whether we first encrypt and then perform deletion on the ciphertext by removing a few blocks, or we first remove a few bits of the attribute and then encrypt to this restricted attribute, the two resulting ciphertexts should be indistinguishable. An ABE scheme with these Delete and Restrict procedures and this indistinguishability property is what we call a deletable ABE. Now to our second building block, the constrained PRF. Let's first look at a usual constrained PRF. What properties does it have? A constrained PRF has a constrained pseudorandomness property. Given the original key of the PRF, which we call the master secret key here, we can constrain the key with respect to a function f. The PRF evaluates correctly and normally using the constrained key whenever the input satisfies the function f we constrained to. However, given only a constrained key, the adversary is not able to evaluate the PRF on inputs that do not satisfy the function; more formally, he cannot distinguish evaluations on these inputs under the original master secret key from uniformly random values. Here we only need single-key adaptive pseudorandomness. The other property we need is called adaptive key simulation: we let the adversary choose a function f adaptively, and there is a procedure called KeySim that generates a simulated key for the function f, indistinguishable from a real constrained key.
This procedure KeySim only needs to know the function f, the security parameter, and the input and output sizes of the PRF; it needs no secret key information. Therefore it can basically be run by anyone given the description of f, and this will be needed later in the encryption algorithm of our construction. Now we move on to the special properties we need for our construction to conform with our framework, which we call a deletion-conforming PRF. Firstly, when we constrain the master secret key, we in fact just remove some blocks of the key according to a set of indices. This should remind you of what we just discussed: the restriction on the attribute, where given a set of indices we remove some bits of the attribute; here we do the same thing to the master secret key. But how do we know which indices to delete for a given f? We have another procedure that takes the description of f and outputs a set of indices telling us which blocks to remove from the master secret key in order to realize the constrained functionality for f. Secondly, we need a slightly special evaluation procedure: we can evaluate the PRF on a hard-coded input x using a special circuit evaluation. It takes a circuit with the input x hard-coded and a PRF secret key, which can be a constrained key or the master secret key, and outputs the same value as a normal PRF evaluation on that key and input x. Now we can build our adaptively secure ciphertext-policy ABE scheme from these two underlying building blocks, the deletion-conforming constrained PRF and the deletable key-policy ABE. Let's start with the KeyGen algorithm of our ciphertext-policy ABE.
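The deletion-conforming structure can be illustrated with a toy constrained PRF. Here I use a bit-fixing functionality as an illustrative stand-in for the subset functionality in the talk, and a hash in place of a real PRF; this is not a secure cPRF, it only shows how constraining can literally be the deletion of key blocks, with a separate procedure computing which indices to delete from f.

```python
# Toy deletion-conforming constrained PRF (bit-fixing constraints).
import hashlib
import os

def prf_setup(n):
    # Master secret key: one block per (position, bit) pair.
    return {(i, b): os.urandom(16) for i in range(1, n + 1) for b in (0, 1)}

def indices_to_delete(fixed_bits):
    # fixed_bits, e.g. {2: 1}, fixes x_2 = 1; an input satisfies the
    # constraint iff it matches every fixed bit. Constraining deletes the
    # blocks for the complementary bits at the fixed positions.
    return {(i, 1 - b) for i, b in fixed_bits.items()}

def constrain(msk, fixed_bits):
    # Constraining is literally deletion of key blocks.
    dead = indices_to_delete(fixed_bits)
    return {k: v for k, v in msk.items() if k not in dead}

def prf_eval(key, x):
    # x is a bit-string such as "110". Evaluation needs block (i, x_i) for
    # every position i; a constrained key lacks a needed block exactly on
    # inputs that violate the constraint.
    h = hashlib.sha256()
    for i, bit in enumerate(x, start=1):
        h.update(key[(i, int(bit))])
    return h.hexdigest()
```

On a satisfying input the constrained key evaluates to the same value as the master secret key; on a violating input the required block is simply missing.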
Our KeyGen algorithm takes in a master secret key, which is just the PRF master secret key together with the deletable ABE master secret key. To derive a key for an attribute x, we first compute the PRF evaluation on x using the PRF master secret key, obtaining a value we call t. Then we run the underlying deletable ABE's KeyGen with respect to the following policy; recall that this is a key-policy ABE, so the key derivation algorithm takes a policy. The policy F_{x,t} takes in a secret key sk (looking forward, this will actually be a constrained PRF key that we generate), and it outputs 1 if the PRF evaluation, using the circuit with x hard-coded on the input sk, does not equal the t we computed previously; it outputs 0 otherwise. Then we need to encrypt a message. What do we do? Taking the policy f, we compute a simulated key using the PRF key simulation algorithm; recall this can be done by anyone, because it only needs the description of f and the security parameters. After we get the simulated key, we encrypt the message using the underlying deletable ABE's encryption with the simulated key as the attribute. Decryption is simple: we just run the underlying deletable ABE decryption using the secret key associated with the policy F_{x,t} we talked about. It is probably not obvious at first sight why correctness holds for our scheme; we refer the audience to the details in our paper, which make use of the properties of the PRF simulated key and the real master secret key. To show security, we proceed through hybrids as follows. In our first hybrid, we switch the attribute used during encryption from the simulated key to a real constrained PRF key.
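The data flow of the KeyGen / Encrypt / Decrypt description above can be traced in a minimal sketch. All primitives here are insecure placeholders of my own: `prf_eval` is a hash, the KP-ABE is the trivial toy in which the key's policy is checked against the ciphertext attribute, and `key_sim` just samples fresh randomness as a stand-in for the real KeySim (which, as noted, needs only f and public parameters). The point is how the pieces connect, not security.

```python
# Minimal trace of the CP-ABE built from a cPRF and a KP-ABE.
import hashlib
import os

def prf_eval(prf_key, x):
    return hashlib.sha256(prf_key + x.encode()).hexdigest()

def key_sim(f):
    # Placeholder simulated PRF key for policy f (the real algorithm is
    # public, i.e. uses no secret information).
    return os.urandom(16)

def cp_keygen(prf_msk, x):
    t = prf_eval(prf_msk, x)
    # KP-ABE key for the policy F_{x,t}(k) = 1 iff Eval(k, x) != t.
    policy = lambda k: 1 if prf_eval(k, x) != t else 0
    return {"x": x, "kp_policy": policy}

def cp_encrypt(f, message):
    sim_key = key_sim(f)
    # Toy KP-ABE encryption to the attribute sim_key.
    return {"attr": sim_key, "payload": message}

def cp_decrypt(sk, ct):
    # Toy KP-ABE decryption: succeeds iff the key's policy accepts the attribute.
    return ct["payload"] if sk["kp_policy"](ct["attr"]) == 1 else None
```

In this toy, an independent simulated key evaluates to something other than t with overwhelming probability, so decryption goes through; in the real scheme, correctness and security rest on the key simulation, deletion conforming, and deletion indistinguishability properties rather than on this trivial check.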
Recall that in the real encryption we encrypt to an attribute that can be generated by anyone, since this is a public-key encryption procedure, by running the key simulation on the function f. In this hybrid, we switch it to a real constrained key derived from the master secret key for the functionality f. This indistinguishability follows directly from the key simulation property of the constrained PRF. Secondly, we use the Restrict algorithm to perform the constraining on the PRF master secret key. As we mentioned for our deletion-conforming constrained PRF, these two operations are equivalent: constraining is just removing a few blocks from the master secret key, using the set of indices to delete for f. Finally, notice that in the previous two hybrids we performed this restriction, or removal, on the master secret key, which gives us the attribute we want to encrypt to, and then we encrypt with respect to this constrained-key attribute. In the final hybrid, we switch the order of deletion: we first encrypt normally, with the unconstrained PRF master secret key as the attribute, using the deletable ABE encryption, and then we run Delete on the ciphertext using the indices to delete for f. Taking f, we can still generate this set of indices, and nothing else is affected: we can do the deletion on the master secret key, or the same deletion on the ciphertext. This step follows from the deletion indistinguishability of the deletable ABE. Finally, suppose an adversary can break the adaptive CPA security of our ABE scheme in the final hybrid.
Then we can use it to break the selective CPA security of the underlying deletable ABE. Finally, how do we instantiate our building blocks? The first is the deletion-conforming constrained PRF; we can realize it for the subset functionality following the constructions in these two works. Secondly, for deletable ABE, we can build on several lines of work: we can modify the following schemes into deletable ABE schemes. On the group side, we can modify the GPSW 2006 scheme, which uses a search assumption on bilinear groups. On the lattice side, we can do both Boyen 13 and BGG+ 14. As we mentioned, there was recently an attack on the Boyen 13 paper, which came out after the first draft of our paper; we still leave the example in our paper for illustrative purposes, to show how deletable ABE works. Regarding related works, besides all the past works we already mentioned, there was also a concurrent work by Katsumata, Nishimaki, Yamada, and Yamakawa that extends the functionality of Tsabary's scheme to an inner-product functionality; they also follow the same high-level approach as Tsabary's scheme. Finally, here is our summary. With an adaptively secure constrained PRF and a selectively secure ABE, we can realize adaptively secure ABE. The policy class is just subset constraints, which follows from the policies we can realize for the constrained PRF. We need the ABE to be deletable, but we can build such ABE from a range of assumptions, both search bilinear and lattice. Thank you, that's it for our talk. Please refer to our paper for more details.