Hi, thank you for the introduction. So I will talk about attribute-based encryption. Let me start with the definition. Attribute-based encryption (ABE) is a public-key encryption system where there are multiple users that hold different secret keys, and the idea is that you can encrypt a message in such a way that only some of them are authorized to decrypt it. The way it is formalized is as follows. There is an authority, and she has a master secret key. The master secret key has full permissions: it can decrypt any ciphertext. Given this master secret key, it is also possible to generate constrained keys. Every secret key, except for the master secret key, is identified by some value that we call the attribute of the key, and this attribute is taken from some exponentially large space, so there can be many keys. When someone wants to encrypt, he runs the encryption algorithm and has to provide some function f that we call the policy. This policy determines who is allowed to decrypt: it takes an attribute as input and outputs either 0 or 1, to determine whether that attribute is authorized to decrypt. OK, so now given a candidate ABE construction, in this work we focus on two properties of the construction. The first one is the security guarantee: there is full security, and there is also a relaxed notion called selective security, and I will explain the difference in the next slide. The second property is the class of policies that we can associate with the ciphertext. Of course we want it to be as expressive as possible, so that we can have more complex access structures. OK, and a special case that will be important for this talk is the notion of IBE. It was actually defined before ABE, and ABE is a generalization of it, but I like to think of it as a special case of ABE.
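As a toy illustration of this syntax (all names invented, and no actual cryptography — the "ciphertext" here just carries the policy in the clear and enforces the f(x) = 1 check, which is exactly what a real ABE scheme must achieve cryptographically):

```python
import secrets

class ToyABE:
    """Toy ABE interface only. Completely insecure: the ciphertext stores
    the policy and message in the clear, guarded by a permission check."""

    def setup(self):
        self.msk = secrets.token_hex(16)   # stand-in for the master secret key

    def keygen(self, attribute):
        # A constrained key, identified by an attribute from a large space.
        return {"attr": attribute}

    def encrypt(self, policy, message):
        # policy: a function from attributes to {0, 1}.
        return {"policy": policy, "payload": message}

    def decrypt(self, sk, ct):
        # Decryption succeeds iff the key's attribute is authorized.
        if ct["policy"](sk["attr"]) == 1:
            return ct["payload"]
        return None

abe = ToyABE()
abe.setup()
# IBE as a special case: the policy is a point function for one identity.
ct = abe.encrypt(lambda a: 1 if a == "alice" else 0, "hello")
assert abe.decrypt(abe.keygen("alice"), ct) == "hello"
assert abe.decrypt(abe.keygen("bob"), ct) is None
```

The last three lines show the IBE special case mentioned above: a point-function policy means only the single attribute "alice" can decrypt.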
So this is ABE where the supported policies are only point functions, so every ciphertext is targeted at a single attribute. OK, so let's talk about the security notions. We want to define security in a way that captures this notion of decryption, where the secret key for x can decrypt the ciphertext for f if and only if f(x) = 1. So let's say we fix some function f. We can now partition the attribute space into two types of attributes: authorized attributes and non-authorized attributes. We want to say that even if there is a collusion of users that try to combine their keys, then as long as none of them has an authorized key, they cannot decrypt the message. In other words, if there is a collusion of adversarial users and all of them are in the white areas, the non-authorized areas, then they cannot decrypt the ciphertext. To capture this requirement there is a security game, defined as follows. There is the challenger, who holds the master secret key, and there is the adversary, who controls all of the colluding users. The game goes as follows. First the challenger sends the public key to the adversary, and then there is a query phase. In the query phase, each time the adversary can send some attribute x, and the challenger generates a key for that specific attribute and sends it back to the adversary, and they can repeat this multiple times. At some point the adversary asks for a challenge ciphertext. To do that, he specifies some policy f, the challenger encrypts the message, and the goal of the adversary is to decrypt it. We only care about adversaries that query for keys that are non-authorized, because this is the only case where we need to guarantee security. So we assume that the adversary only queries for keys in the white areas, and his goal is to eventually break the challenge ciphertext.
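The flow of the game can be sketched as follows (a toy transcript of the phases only — no real challenger, and the admissibility condition is the check that every queried x satisfies f(x) = 0):

```python
import secrets

def security_game(adversary):
    """Toy full-security ABE game. Phase 1: key queries for attributes x.
    Phase 2: the adversary names a policy f and two messages, and the
    challenger encrypts one of them at random.
    Admissibility: every queried x must satisfy f(x) = 0."""
    queried = []

    def key_oracle(x):
        queried.append(x)
        return ("sk", x)                       # stand-in for KeyGen(msk, x)

    f, m0, m1, guesser = adversary(key_oracle)
    if not all(f(x) == 0 for x in queried):
        raise ValueError("inadmissible: queried an authorized key")
    b = secrets.randbelow(2)
    ct = ("ct", [m0, m1][b])                   # stand-in for Enc(pk, f, m_b)
    return guesser(ct) == b                    # did the adversary win?

# A trivial admissible adversary: queries one non-authorized key, guesses randomly.
def adversary(key_oracle):
    key_oracle("bob")                          # f("bob") = 0 below, so admissible
    f = lambda a: 1 if a == "alice" else 0
    return f, "m0", "m1", lambda ct: secrets.randbelow(2)

assert security_game(adversary) in (True, False)
```

A secure scheme is one where every efficient admissible adversary wins with probability only negligibly better than 1/2.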
Okay, so now before I go into any specific construction, I want to talk about how a security proof that satisfies this definition should look. We want to show a reduction to some computationally hard problem. In other words, we want to say that if there is an adversary that has an advantage in the security game, we can use this advantage in order to gain an advantage against a computationally hard problem. So we need to simulate the challenger, interact with the adversary, and eventually somehow use his responses in order to solve the hard problem. That is, we have to construct a simulator, and the simulator gets as input an instance, let's say of DDH or LWE, or whatever assumption you want. Then we need to simulate all of the responses of the challenger: we need to simulate the public key, then we need to answer all of the key queries, and eventually we need to simulate the challenge ciphertext. We want the challenge ciphertext to be hard, because we want to say that if the adversary can decrypt the ciphertext, then we can use his responses in order to solve the computationally hard problem. Okay, so I want to give some intuition for why it is hard to construct such a simulator. We need to be able to answer all of the queries of the adversary, and we don't know in advance which attributes he will query. So imagine that we somehow create a simulator that is able to simulate a key for any possible attribute. That's nice, because now we can interact with the adversary and he won't notice that we're in a simulation. But the problem is that now the simulator can also simulate valid keys for attributes that are authorized to decrypt the ciphertext. This means the simulator can generate a functional key for an attribute that can decrypt the challenge ciphertext, locally decrypt the ciphertext, and therefore locally solve the hard problem without even interacting with the adversary. Okay.
And because of that, the problem cannot actually be computationally hard: the simulator runs in polynomial time, so we have exhibited an efficient algorithm that solves the hard problem. Okay, so this means that our simulator needs to satisfy very specific requirements: it should be able to simulate keys for the non-authorized attributes, but it shouldn't be able to simulate keys for the authorized attributes. Why is it difficult to achieve such a simulator? Because the capabilities of the simulator are determined by f, but we only learn f at a relatively late stage of the game, so we need to create the simulator and answer queries even before we know what f is. Okay, so this is why achieving full security is challenging, and the immediate way to avoid this issue is to use the relaxed notion of selective security. In selective security, we simply require that the adversary announce f before the game even begins. This doesn't make it trivial to come up with a scheme that is selectively secure, but it makes the problem easier. Okay, so I will briefly go over previous results. There are two main lines of work: one of them relies on group-based assumptions and the other on lattice-based assumptions, and both of them evolved in a similar way. First there were constructions that are selectively secure for IBE — recall that IBE is ABE for point functions. Then there were constructions that are fully secure for IBE, and then there were constructions for a larger class of functions, so ABE, but only selectively secure. Then eventually, with the group-based assumptions, there was a breakthrough with the dual-system technique by Waters, and there were constructions of fully secure ABE. With lattices, this problem remained open.
Okay, so we partially solved this problem — not completely, but we show a fully secure ABE based on the LWE assumption, and the supported function class is what we call t-CNF, which is CNF formulas with constant locality of the clauses: each clause can access only a constant number t of bits of the input. So for example, 3-SAT instances are 3-CNF formulas. Okay, so our approach takes three steps. In the first one, we use an idea called the tagging technique that was presented by Gentry, in the context of group-based constructions. We managed to construct something that has the same kind of behavior, but is based on lattice assumptions, and it also uses a PRF. As a second step, we generalize this approach: we show that if instead of starting from a PRF we start from a constrained PRF, then we can get ABE instead of IBE. But in order to implement this idea with lattice techniques, we need the constrained PRF to satisfy some special structural properties. So in order to get the final construction, we also need to show a constrained PRF that satisfies those requirements, and this is where we have the limitation on the supported function class — only t-CNF. That will be the last step. Okay, so let me describe the tagging idea. I focus now just on IBE, because this is the context it was presented in. So now there is only a single authorized attribute, and the adversary can query for keys for all the other attributes. Okay, so here is the idea. You add another dimension to the attribute space. So now there are attributes, and there are also tags. Each attribute is associated with a row, and each tag is associated with a column. Now when you want to encrypt with respect to some attribute, you go to the respective row, you randomly select one column, and this is where you generate the ciphertext. And when you want to generate a secret key, you do something similar.
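To make the function class concrete, here is a small sketch of evaluating a t-CNF policy (the clause representation — a list of (position, satisfying-bit) literals per clause — is my own choice for illustration):

```python
def eval_tcnf(clauses, x):
    """Evaluate a t-CNF policy on an attribute bit-string x.
    Each clause is a list of (index, wanted_bit) literals over at most t
    positions; a clause is satisfied if any of its literals matches."""
    return int(all(any(x[i] == b for (i, b) in clause) for clause in clauses))

# A 2-CNF example: (x0 OR NOT x1) AND (x1 OR x2), where each literal is
# written as (position, value-that-satisfies-it).
clauses = [[(0, 1), (1, 0)], [(1, 1), (2, 1)]]
assert eval_tcnf(clauses, [1, 1, 0]) == 1   # clause 1 via x0=1, clause 2 via x1=1
assert eval_tcnf(clauses, [0, 1, 0]) == 0   # clause 1 fails: x0=0 and x1=1
```

The point of the locality restriction is that each clause reads only t bits, regardless of the total attribute length.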
So you go to the row of that attribute, you randomly choose a column, and then you generate a key that can decrypt the entire row except for that column. So it's like a punctured key that can decrypt an entire row except for one very specific cell. Now in terms of correctness, as long as the ciphertext and the secret key don't fall in the same column, we're good, and decryption works. So why is this useful for full security? Here is how the simulator will look. When we initialize the simulator, we first choose, for every row, some column — we call it the pink column — and we choose it even before the game begins. Now, instead of choosing the cells of the ciphertext and the secret keys randomly, we will always stick with those random cells that we already chose before the game started. So when someone queries for a secret key, we will always puncture it at that pre-chosen point, and equivalently, when someone asks for a ciphertext, we will always generate it at that point. You can already see that now the simulator can answer any ciphertext query and any secret-key query, but it still cannot decrypt a challenge ciphertext, because its simulated secret keys will not work on its simulated ciphertexts. But on the other hand, this is still indistinguishable to the adversary, because if on every row you see only a secret key or only a ciphertext, then this pink column looks random — exactly as in the real construction. So the only chance for the adversary to distinguish between these two ways of generating the secret key and the ciphertext is if he gets both a secret key and a ciphertext on the same row, but we know from the security game that he is not allowed to make such a query. Okay, so our first result is that we show how to implement this high-level idea based on LWE.
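A toy sketch of the tagging idea for IBE (no cryptography at all — it only shows the row/column geometry and the correctness condition that key and ciphertext avoid the same column):

```python
import secrets

NUM_TAGS = 2**16   # toy column (tag) space; a real one is exponentially large

def encrypt(attribute, message):
    # Encrypt in the attribute's row, at a freshly random column.
    return {"attr": attribute, "tag": secrets.randbelow(NUM_TAGS), "msg": message}

def keygen(attribute):
    # Punctured key: decrypts the whole row except one random column.
    return {"attr": attribute, "punctured_tag": secrets.randbelow(NUM_TAGS)}

def decrypt(sk, ct):
    # Correctness: succeeds whenever key and ciphertext avoid the same column.
    if sk["attr"] == ct["attr"] and sk["punctured_tag"] != ct["tag"]:
        return ct["msg"]
    return None

ct = encrypt("alice", "hi")
sk_good = {"attr": "alice", "punctured_tag": (ct["tag"] + 1) % NUM_TAGS}
assert decrypt(sk_good, ct) == "hi"        # different column: decryption works
sk_bad = {"attr": "alice", "punctured_tag": ct["tag"]}
assert decrypt(sk_bad, ct) is None         # same cell: punctured out
assert decrypt(keygen("bob"), ct) is None  # wrong row
```

The simulator described above would fix one "pink" column per row in advance and always use it for both keys and ciphertexts, which is indistinguishable as long as no row receives both.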
And the way that we implement this random mapping is with a PRF, where in Gentry's work it was done with a random polynomial. Okay, so this is the IBE idea, and I will not show the technical details of the construction now, because I want to talk about ABE. Okay, so let's see how to generalize this idea to ABE. Looking back at the IBE construction, the ciphertext was intended for a single attribute, so we could focus on that specific row. But now we're in ABE, so the ciphertext can be targeted at multiple attributes, which means it can be targeted at multiple rows. If we ignore efficiency for a moment, we can generalize the tagging idea as follows. Imagine an encryption algorithm that simply goes over every attribute, and if the attribute is authorized, it generates a ciphertext for that attribute: for every authorized row, it samples a random column, generates a ciphertext there, and eventually it concatenates everything and outputs it as the ciphertext. In terms of security, the same argument should work — in the simulation we can just predetermine all of the random columns and then stick to them when we simulate the ciphertext and the secret keys. But now the problem is efficiency: we have to somehow generate those ciphertexts such that they are small, and encryption only takes polynomial time. So what are our efficiency requirements? We need some succinct description of those random columns that are associated with the ciphertext. And in order for the security argument to go through, we need the succinct description to satisfy two properties. The first one is that we should be able to simulate it in the pink cells when we are in the security proof, because we want the security proof to always use those predetermined cells. The other one is that we want this succinct description to reveal no more information about the pink cells than it must.
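Continuing the toy sketch, the inefficient generalization might look like this (purely illustrative — a real attribute space is exponentially large, which is exactly why this loop is not an actual algorithm):

```python
import secrets

NUM_TAGS = 2**16   # toy column (tag) space

def keygen(attribute):
    # Punctured key: decrypts its whole row except one random column.
    return {"attr": attribute, "punctured_tag": secrets.randbelow(NUM_TAGS)}

def encrypt_inefficient(policy, attribute_space, message):
    # One ciphertext component per authorized row, each at a random column.
    return {x: {"tag": secrets.randbelow(NUM_TAGS), "msg": message}
            for x in attribute_space if policy(x) == 1}

def decrypt(sk, ct):
    comp = ct.get(sk["attr"])
    if comp is not None and comp["tag"] != sk["punctured_tag"]:
        return comp["msg"]
    return None

space = ["alice", "bob", "carol"]
ct = encrypt_inefficient(lambda a: 1 if a != "bob" else 0, space, "hi")
assert set(ct) == {"alice", "carol"}   # non-authorized row gets no component
sk = {"attr": "alice", "punctured_tag": (ct["alice"]["tag"] + 1) % NUM_TAGS}
assert decrypt(sk, ct) == "hi"
assert decrypt(keygen("bob"), ct) is None
```

The succinct description the talk asks for must compress the per-row random columns of such a ciphertext into something short.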
So in particular, it shouldn't reveal the values of those pink cells in the rows that are not authorized by f, because we want to claim that the secret keys are still indistinguishable from the real secret keys, where those cells are chosen randomly. Okay, so we need something that satisfies those two requirements, and there is something that fits exactly into that definition: a constrained PRF. So what is a constrained PRF? In a standard PRF, there is the seed, and if you have the seed, you can compute the PRF on any input; if you don't have the seed, then everything looks indistinguishable from uniform to you. In a constrained PRF, you can also generate constrained keys, and those constrained keys can evaluate the PRF only on a subset of the inputs. Similarly to ABE, a constrained seed is associated with some policy, and this policy determines on which inputs you can evaluate the PRF and on which inputs it looks random to you. For this work, we will need a constrained PRF that only supports a single key, and this actually makes things easier, because there are constructions that can only support a single key. Okay, so now going back to this high-level idea, let's see how to use the constrained PRF. In order to commit to those pink values, we will simply choose some PRF seed. We say that the input of the PRF is the row, and the output of the PRF is the corresponding column of the pink cell. Then when we want to generate a secret key, we will simply use the output of the PRF with respect to the seed that we already chose, and when we want to generate a ciphertext, then instead of choosing random columns, we will associate it with a constrained seed derived from the seed that we chose at the beginning.
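The functional behavior of a constrained PRF can be sketched as follows (a toy: the PRF is SHA-256-based for illustration, and the "constrained key" is just a table over a small attribute space — functionally it reveals values exactly on authorized inputs and nothing elsewhere, but it is not succinct, which is precisely the property a real constrained PRF adds):

```python
import hashlib
import secrets

NUM_TAGS = 2**16

def prf(seed, x):
    # PRF(seed, row) -> column of the "pink" cell in that row.
    h = hashlib.sha256(seed + x.encode()).digest()
    return int.from_bytes(h[:4], "big") % NUM_TAGS

def constrain(seed, policy, attribute_space):
    # Toy constrained key: the PRF table restricted to authorized rows.
    return {x: prf(seed, x) for x in attribute_space if policy(x) == 1}

def eval_constrained(ckey, x):
    # On non-authorized inputs the real PRF value stays pseudorandom to
    # the key holder; here the table simply has no entry.
    return ckey.get(x)

seed = secrets.token_bytes(16)
space = ["alice", "bob", "carol"]
policy = lambda a: 1 if a in ("alice", "carol") else 0
ckey = constrain(seed, policy, space)
assert eval_constrained(ckey, "alice") == prf(seed, "alice")  # authorized row
assert eval_constrained(ckey, "bob") is None                  # reveals nothing
```

In the scheme, keygen would use prf(seed, x) as the punctured column for row x, and the ciphertext would carry a constrained seed for f in place of this table.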
So by the properties of the PRF, we get exactly the guarantee that this constrained seed describes only the pink points of the rows that are authorized by f, but reveals no information about the other points — so it satisfies the second requirement. Now, in order to satisfy the first requirement, we need to somehow change how we generate the ciphertext in the real scheme, to make it indistinguishable from the way we generate it in the simulation. What we do is describe the ciphertext by a constrained seed for the function f as well, but starting from independently chosen seeds: we have this σ′ that we generate freshly every time we encrypt, and we compute from it just the constrained seed for f, and this is how we describe the ciphertext in the real scheme. Okay, so I didn't talk at all about lattices or how to actually implement these ideas, and I want to say just a few words about it, because I don't have time to go into the technical details. The construction is based on the techniques developed by BGG et al., whose construction gives a selectively secure ABE. We use their selective technique, where the thing we commit to with the selectiveness is this PRF seed. Now, because of the technical details of how their construction is built, we cannot implement it with just any constrained PRF — it needs to satisfy a special property. We call this property gradual evaluation, and I will give some intuition for what it means. Fix some input x, and consider two possible ways of computing the PRF value on that input x. The first way is to use the master seed, and the other way is to first generate a constrained seed and then use this constrained seed in order to evaluate on the point x.
Any constrained PRF guarantees that the output will be the same under those two kinds of computation, but we require something much stronger: the computations should be equivalent when you describe them as circuits. It should be exactly the same sequence of gates that you compute, whether you use the master seed or a constrained seed, and this should hold for any function that authorizes x — so for a very large number of constrained keys. Okay, so lastly, in order to construct a constrained PRF that satisfies this definition, we rely on the work of DKNY. They showed a constrained PRF for bit-fixing that can support a constant number of keys. We change the parameters there a little and get a construction that supports a single key, but for constant-locality CNF (t-CNF) instead of bit-fixing. And this construction also satisfies the gradual-evaluation property. So that's it. If you have any questions, please come down to the microphone. If there are no questions, let's thank Rotem again.