So, hello all, I will talk about attribute-based encryption and its generalizations for non-deterministic finite automata from the learning with errors assumption. This is joint work with Shweta Agrawal and Shota Yamada, and it was initiated while Shota was visiting IIT Madras last year. Let us start with the notion of attribute-based encryption, introduced by Sahai and Waters in 2005. It is a generalization of public key encryption where the goal is to provide expressive access control over encrypted data, as we have seen this morning. The setting is this: say we have a server where we store encrypted files. These files are encrypted with respect to public user attributes, and there is a key authority which can generate restricted secret keys. The restrictions come in terms of policies, or Boolean functions, embedded inside the secret keys, in such a way that a key can decrypt a ciphertext if and only if the attribute with respect to which the file was encrypted satisfies the embedded policy. Further, we also want to ensure that any set of colluding users cannot decrypt a file as long as none of those users is individually authorized to decrypt it. This is formalized by the following four algorithms: setup, encrypt, key generation, and decrypt. Setup takes the security parameter and outputs a public key and a master secret key. Encryption takes the public key, an attribute x, and a message m, and outputs a ciphertext encoding the message with respect to the attribute x. Key generation takes the master secret key and a circuit f, and outputs a secret key corresponding to this functionality.
When this secret key acts upon the ciphertext in the decryption algorithm, it outputs the hidden payload m if and only if the attribute satisfies the function. Security goes like this: for any challenge attribute, any pair of messages m0, m1, and all the secret keys for functionalities seen by the adversary, the ciphertexts for m0 and m1 with respect to the attribute x should remain computationally indistinguishable, as long as those secret keys are individually unable to decrypt the challenge ciphertexts. This notion of attribute-based encryption has seen an elegant sequence of works from 2005 onward, with constructions for restricted circuit classes, until in 2013 Gorbunov et al. first showed how to support all circuits from the learning with errors assumption. Now, even though ABE for all circuits was realized, and circuits are a powerful model of computation, they have their own inherent drawbacks: circuits only support fixed-length inputs, their description changes with the input length, and they incur worst-case running time over all inputs of a given length. So naturally our attention turns towards uniform models of computation, like finite automata, Turing machines, or RAM, where we have arbitrary-length input support with a fixed description size, and which also incur input-specific running times. Moreover, if we have ABE supporting a uniform model of computation, then that gives us the flexibility of giving out a single key per functionality, because the description does not change with the input length. So the question now is: what do we know so far about ABE for uniform models from standard assumptions?
As we saw in the last talk, the first construction was given by Waters in 2012: ABE for DFAs with unbounded attribute lengths and an unbounded number of keys, from a q-type assumption, which is a parameterized assumption. Subsequently, in 2017, Agrawal and Singh gave a construction of the same primitive with unbounded attribute length support from the LWE assumption, but restricted to the single-key setting. And concurrently, as we just saw, the work by Gong et al. constructs the same primitive from the standard k-linear assumption, a static assumption, with unbounded input length and an unbounded number of keys. In this context, our work constructs ABE for non-deterministic finite automata with an unbounded number of keys and unbounded-length attributes from the learning with errors assumption. Our construction works in the symmetric key setting. From 2012 onward there was no progress on how to support non-deterministic finite automata, and it was explicitly left as an open problem by Waters in 2012. We construct it for the first time, and we additionally leverage our techniques to generalize it to predicate encryption, bounded-key functional encryption, and reusable garbling for NFAs from LWE. Additionally, we show a barrier to obtaining fully collusion-resistant FE for DFAs, in the sense that it would imply iO from standard assumptions. We also have a work, concurrent with Gong et al. and subsequent to the present one, which takes the techniques from this work and constructs the same primitive for DFAs from the same assumption, but with very different techniques: while Gong et al. achieve key-policy ABE, our techniques are generic and work in a modular fashion, which allows us to get both ciphertext-policy and key-policy ABE. That will appear at TCC this year.
In the context of the current talk, I want to highlight that our construction does not only support NFAs; it actually supports a generalized class of uniform circuits with bounded depth. NFAs are a particular instantiation of this class, and since NFAs are the more practically relevant case, we will restrict our attention to NFAs for this talk. So let us look at the techniques to construct this primitive. First, how do we model secret key ABE for NFAs? It is done in the same way as before, except that there is only a master secret key, which is used for both key generation and encryption. The main differences are that the attribute length is an unbounded polynomial, and the key generation takes the NFA description as input. More importantly, the key generator does not know the input length, and it still has to output a secret key which works with arbitrary input lengths. Security is modeled in the same way as before. So let us look at how we construct it. We have a two-step solution. The first step consists of constructing the yellow and red boxes: the yellow box is an ABE for NFA scheme with unbounded attribute length but bounded-size NFAs, and the red box is ABE for NFA with bounded attribute length but unbounded-size NFAs. In step two we have a way to combine them so as to be unbounded in both coordinates, that is, unbounded attributes and unbounded NFA machines. For ease of speech I will refer to the yellow box as (u, b), the red one as (b, u), and the green one as just u. So let us look at how to construct the (u, b) primitive. Since we are working in the symmetric setting, let us take the master secret key to be a PRF key for now.
The naive idea to construct such a primitive is to just take the NFA, convert it into a circuit, and employ an ABE-for-circuits scheme, since we know how to construct those. But the key generator does not know the input length, and therefore it cannot simply convert the NFA into a circuit. At this point we note that the attribute length, although an unbounded polynomial, is upper bounded by 2^λ, where λ is the security parameter. Based on this, what we can do is take a circuit ABE scheme and instantiate 2^λ many key pairs from it, where the i-th key pair supports attributes of length i, for i ranging from 1 to 2^λ. We can then encrypt a message under the appropriate public key, based on the length of the attribute. For key generation, we convert the NFA M into a set of 2^λ circuits, where the i-th circuit handles inputs of length i, again for i from 1 to 2^λ, and use the ABE master secret keys to generate ABE keys for each of these circuits individually, bundling them together as the secret key for the machine M. Why does decryption work? Because the decryptor knows the length of the attribute, so it can choose which particular secret key to use, corresponding to the circuit equivalent of the NFA M, and decrypt to get back m if M accepts x.
Now the first problem here is that the key size is too long; it is exponential in the security parameter. To reduce it we use a simple trick: we handle only attributes of length 2^i. So instead of instantiating the ABE scheme 2^λ times, we instantiate it λ+1 times, where the i-th instance handles attributes of length 2^i. When the attribute arrives, during the encryption algorithm, we round its length up to the nearest power of 2, pad it with a sufficient number of blank symbols, and encrypt the message under this new attribute. For key generation, we convert the machine M into a set of λ+1 circuits, which individually handle inputs of length 2^i, and then again use the ABE master secret keys to generate keys for these circuits. While this works, and we have reduced the number of keys to a polynomial count, the next problem is that each individual circuit ABE secret key can have an arbitrary size: we do not have a proper bound on the sizes of the underlying circuits M-hat, the circuit equivalents of the NFA. We need to bound them somehow. To do so, our first idea is to use the scheme by Boneh et al. from Eurocrypt 2014, which relies on LWE and ensures that the size of the circuit ABE secret keys grows only with the depth of the underlying circuits. Once we have only this depth dependency, we need a bound on the depth of these circuits. A naive way to convert an NFA M into a circuit is to track the set of reachable states while reading each input symbol, but this leads to a circuit of depth polynomial in the length of the input, and that is what we do not want.
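The padding trick can be sketched in a few lines. This is my own illustrative helper (the blank symbol and function name are hypothetical): round the attribute length up to the next power of two, and the exponent i selects which of the λ+1 ABE instances to use.

```python
# Sketch of the padding trick: round the attribute length up to the next
# power of two, so only lambda+1 ABE instances (one per length 2^i) are
# ever needed. BLANK is a hypothetical padding symbol outside the alphabet.
BLANK = "#"

def pad_attribute(x):
    n = max(len(x), 1)
    i = (n - 1).bit_length()            # smallest i with 2^i >= n
    padded = x + BLANK * (2**i - len(x))
    return i, padded                    # i indexes the ABE instance to use

i, padded = pad_attribute("10110")      # length 5 rounds up to length 8
assert i == 3 and padded == "10110###"
```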
So we actually employ a divide-and-conquer strategy from the complexity literature to evaluate these NFAs. This ensures that the depth of the resulting circuit scales only polylogarithmically in the machine size and the input size, which in turn ensures that the circuit ABE secret key grows only polynomially in the security parameter. The third, and probably most crucial, problem in our naive construction is the inefficient key generation. Note that our (u, b) key generation algorithm takes a machine M and converts it into a circuit, but this circuit may be too large for the key generator to even read, and ideally the running time of key generation should not depend on the input length at all. The solution is to redistribute the computation in a way that delegates the inefficient part to the encryption and decryption algorithms of the (u, b) primitive. Why encryption and decryption? Because these are the only two algorithms that can still run in time polynomial in the length of the attribute. Thankfully, functional encryption comes to our rescue here. FE, as we have already seen in the previous talks, is a generalization of ABE where a secret key corresponds to a circuit and a ciphertext encodes a message, such that decrypting the ciphertext with the secret key reveals only the function of the plaintext and nothing else. In our context we only need single-key secure functional encryption, which can be instantiated from the works of Goldwasser et al. and Agrawal, and can still be based on LWE. So let us see how FE helps us delegate the NFA-to-circuit transformation and the ABE secret key generation. The idea is that instead of converting the machine into a circuit, the key generator now takes the input machine and encodes it into an FE ciphertext.
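One standard way to realize such a divide-and-conquer evaluation, which I sketch here for intuition (the exact circuit in the construction may differ), is to view each input symbol as a Boolean reachability matrix over the states and multiply the matrices in a balanced tree, giving logarithmically many levels of matrix products rather than one sequential pass.

```python
# Illustrative divide-and-conquer NFA evaluation: one Boolean transition
# matrix per input symbol, combined by a balanced tree of matrix products,
# so the corresponding circuit depth is polylogarithmic in the input length.

def bool_matmul(A, B):
    n = len(A)
    return [[any(A[i][k] and B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def tree_product(mats):
    # Balanced recursion: O(log len(mats)) levels of matrix products.
    if len(mats) == 1:
        return mats[0]
    mid = len(mats) // 2
    return bool_matmul(tree_product(mats[:mid]), tree_product(mats[mid:]))

def nfa_accepts(delta, start, accept, word):
    # delta[sym][i][j] = True iff state j is reachable from i on symbol sym.
    M = tree_product([delta[sym] for sym in word])
    return any(M[s][t] for s in start for t in accept)

# Toy 2-state NFA over {0,1} accepting words that end in 1:
# state 0 loops on any symbol; on a 1 it may also jump to accepting state 1.
delta = {
    "0": [[True, False], [False, False]],
    "1": [[True, True],  [False, False]],
}
assert nfa_accepts(delta, start={0}, accept={1}, word="0101")
assert not nfa_accepts(delta, start={0}, accept={1}, word="0110")
```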
In parallel, the encryption algorithm takes the attribute x and the message m, and generates an FE secret key for a circuit C_x which has the attribute length |x| hardwired in it. Given these two, the decryptor can run the FE decryption on the fly to get back C_x(M), by the functionality guarantee of FE. The circuit C_x is described as follows: it takes the machine M as input; based on the hardwired attribute length, it computes a suitable ABE key pair which can handle inputs of length |x|; it converts the NFA M, inside the circuit, to the circuit M-hat with input length |x|; and it then uses the ABE master secret key to compute and output the ABE secret key for this circuit. The encryption algorithm also outputs, as part of the ciphertext, the ABE encryption of the message m under the corresponding public key, the one whose master secret key was used to compute the secret key for the circuit equivalent of the NFA. Once both are available to the decryptor, it can run the ABE decryption algorithm to output the message if M accepts x. While this template works, we still need to be a bit careful about the implementation of FE here: we do not want the FE ciphertext to scale with the input size again, otherwise we fall into the same trap. For this we use the single-key succinct FE scheme by Goldwasser et al., where it is ensured that the FE ciphertext scales only with the depth, input length, and output length of the circuit, and not with its size, and we carefully bound the depth, input, and output to be polynomials in the security parameter. As a result, the (u, b) key generation and encryption algorithms run in time proportional to the size of the machine and the size of the attribute, respectively.
Further, we have the decryptor generate, on the fly, the ABE secret key needed for the final decryption. But at the same time, because the FE secret key supports a circuit which takes the NFA machine M as input, we need a bound on the machine size; that is why we have support for unbounded attributes but only bounded-size machines. This is the high-level idea of how we construct the (u, b) scheme. As a summary, let us look at the problems we faced with the naive solution and their fixes. We had an exponentially long key, for which we used attributes of length 2^i only. The individual ABE secret keys could be too large; to handle this we instantiated a suitable ABE scheme and ensured that the circuit depth is polynomial in the security parameter. Thirdly, we had an inefficient key generation, where we used the succinct FE scheme to delegate the inefficiency to the encryption and decryption algorithms. So now let us look at how to construct the (b, u) primitive. Note that the attribute length is now bounded; since it is bounded, it can be known to the setup algorithm, and therefore to the key generator and the encryptor, so we can employ a circuit ABE scheme directly to convert NFAs into circuits and instantiate it. We just need to ensure that, because we want to handle unbounded-size NFAs, we have a depth guarantee on the circuits for NFAs of arbitrary size. For that, we can again use the same divide-and-conquer technique. Once this is done, we can see how to construct the u primitive, with unbounded-length inputs and unbounded NFA machines. The high-level idea is again to break the computation into two cases: one where the size of the attribute is at least the size of the machine, and vice versa.
We are working in the symmetric key setting, so we will again have a master secret key which is a PRF key; this can implicitly define a sequence of master secret keys from the underlying (u, b) scheme, where the i-th master secret key handles NFAs of size i, for i ranging from 1 to 2^λ. In parallel, the same PRF key can define the master secret keys from the (b, u) scheme, where the i-th one handles input attributes of length i, again up to 2^λ. Now, how do the encryptor and the key generator work? Let us look at them in parallel, because the encryptor knows the length of the attribute but not the size of the machine, and the key generator knows the size of the machine but not the length of the attribute. The encryptor samples, from the (u, b) scheme, the master secret keys supporting NFAs of sizes 1 to |x|, and generates a ciphertext under each of them. Accordingly, the key generator samples a secret key for the machine M from the unbounded-attribute scheme, and once we know that the size of the attribute x is at least the size of the machine, we know that one of these ciphertexts was generated with the master secret key capable of handling NFAs of size |M|, so we can pair the key with that ciphertext. What if the other case happens, that is, the size of the machine is greater than the size of the attribute? Then we just swap the roles of the two schemes (u, b) and (b, u): the key generator now produces |M| secret keys, each capable of handling attributes of size i, for i ranging from 1 to |M|, and the encryptor gives a ciphertext from the (b, u) scheme which handles attributes of size |x|.
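The two-sided preparation and the matching step can be sketched as follows. This is a toy simulation of the dispatch logic only: all cryptography is elided, the components are labeled tuples, and `accepts` stands in for NFA acceptance; none of the names come from the paper.

```python
# Toy sketch of the combining step: the encryptor prepares (u,b) ciphertexts
# for machine sizes 1..|x|, the key generator prepares (b,u) keys for
# attribute lengths 1..|M|, and decryption pairs whichever side matches.

def combined_encrypt(x, m):
    # One (u,b) ciphertext per machine-size bound i = 1..|x|,
    # plus a single (b,u) ciphertext bound to attribute length |x|.
    ub_cts = {i: ("ub-ct", i, x, m) for i in range(1, len(x) + 1)}
    bu_ct = ("bu-ct", len(x), x, m)
    return ub_cts, bu_ct

def combined_keygen(M, machine_size):
    # One (b,u) key per attribute-length bound i = 1..|M|,
    # plus a single (u,b) key bound to machine size |M|.
    bu_keys = {i: ("bu-key", i, M) for i in range(1, machine_size + 1)}
    ub_key = ("ub-key", machine_size, M)
    return bu_keys, ub_key

def combined_decrypt(ct, sk, accepts):
    ub_cts, bu_ct = ct
    bu_keys, ub_key = sk
    _, attr_len, x, m = bu_ct
    _, machine_size, M = ub_key
    if attr_len >= machine_size:
        # |x| >= |M|: a (u,b) ciphertext for machine size |M| exists.
        pair = (ub_cts[machine_size], ub_key)
    else:
        # |M| > |x|: a (b,u) key for attribute length |x| exists.
        pair = (bu_ct, bu_keys[attr_len])
    # Real decryption would run the underlying scheme on `pair`;
    # here we just check acceptance to illustrate the dispatch.
    return m if accepts(M, x) else None

accepts = lambda M, x: x.endswith("1")     # stand-in for NFA acceptance
ct = combined_encrypt("0101", "payload")
sk = combined_keygen(M="toy-nfa", machine_size=2)
assert combined_decrypt(ct, sk, accepts) == "payload"
```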
So, once this is done, decryption compares the size of the machine and the length of the attribute: if the attribute is larger, it uses the unbounded-attribute scheme to combine the ciphertexts and secret keys and run decryption; otherwise it uses the unbounded-machine scheme. In summary, we run the two underlying schemes (u, b) and (b, u) in parallel to make decryption work while supporting both unbounded attributes x and unbounded machines M. Using the same techniques, we can generalize this to get predicate encryption, bounded-key functional encryption, and reusable garbling for NFAs from the learning with errors assumption. And last but not least, let us see how to get iO from secret key FE for DFAs. The high-level idea is to look at it this way: if we have an NC1 circuit, we can use Barrington's theorem to convert it into a branching program, and then leverage the similarities between a branching program and a DFA, while the input length is fixed, to convert it into a DFA. Once we have this tool, if we have secret key FE for DFAs, we can employ it to obtain secret key FE for NC1 circuits, and as recent results have shown, this is enough to imply indistinguishability obfuscation. This highlights a barrier to obtaining fully collusion-resistant functional encryption for DFAs. Let me conclude by saying that we have the first constructions of ABE and its generalizations for NFAs from LWE; we also illuminate a barrier; and our techniques are new, so we hope they may find more applications in similar and different contexts. We also want to see how to support Turing machines and RAM. And last but not least, our primitive is restricted to the secret key setting; we want a generalization to the public key one. So, thanks for your attention.
If you have a question, please come to the microphone. Yeah, let us thank all speakers again.