Hello, everyone. In this talk, I will present our work on building ABE for DFA that achieves adaptive security under the k-Linear assumption. This is joint work with Hoeteck Wee. Let me explain the notion of ABE for DFA first. ABE stands for attribute-based encryption. In an ABE scheme, a ciphertext encrypts a message M and is associated with an attribute x. A secret key is associated with a policy F. Here, we show three keys associated with three distinct policies. When the attribute on the ciphertext side satisfies the policy on the key side, the key can be used to decrypt the ciphertext and recover the message M, as with the key for F1. Otherwise, we require that the key holder cannot obtain any non-trivial information about the message M, as with the keys for F2 and F3 in the example. This should hold even when several key holders collude with each other and combine their keys in some way. This is the standard security notion of ABE. DFA stands for deterministic finite automaton. ABE for DFA is an ABE scheme whose policy F is a DFA and whose attribute is an input to the DFA. In general, a DFA with q states and alphabet Σ is defined by a transition function δ and a set of accept states. We always take state 1 as the unique start state. The computation is carried out as follows. The machine starts in state 1 and reads the input x symbol by symbol. Every time it reads one symbol of the input, it switches its state according to the transition function δ. Finally, after reading all symbols of the input x, it accepts the input if the current state belongs to the accept set. We show an example of a DFA here and will explain how to carry out the computation on this example in more detail a bit later. The DFA here has three states: 1, 2, and 3. The alphabet consists of two symbols, a and b, so the input is a string of a's and b's. The machine always starts from state 1 and has one accept state, 3; in general, there can be more. The transition function is described by the arrows in the graph.
For example, the red arrow here says that the machine switches from state 1 to state 2 when it reads a. The other two arrows can be read analogously. In fact, DFAs recognize exactly the regular languages. The example here accepts all strings with ab as prefix, like the example shown here. Describing policies as DFAs has a crucial advantage over circuits: the input x can be arbitrarily long while the policy stays compact. This is necessary in scenarios where we do not know the input size in advance, such as network log analysis, tax returns, and web-page virus scanning. The two typical concerns about the security of ABE are the security model and the underlying assumption. For the security model, there are two important models: the selective model and the adaptive model. In both models, the adversary is given the public key, several secret keys, and one challenge ciphertext. The selective model is weaker in the sense that the adversary is asked to commit to the target attribute x for the challenge ciphertext at the very beginning, while the adaptive model has no such restriction. So it is better to have a scheme provable in the adaptive model. In this work, we consider schemes in bilinear groups. The two main assumptions in use are the k-Linear assumption and q-type assumptions. We pursue schemes under the k-Linear assumption for the following three reasons. First, the k-Linear assumption is weaker, which is desirable in general. Second, due to the similarity between k-Linear and LWE, this might be a first step towards instantiations based on the LWE assumption. Finally, the k-Linear assumption is much less complex than q-type assumptions. Typically, it becomes quite challenging to construct a reduction using the k-Linear assumption instead of a q-type assumption, and new conceptual approaches are required in this case. The research on ABE for DFA was initiated by Waters in 2012. The proposed scheme is selectively secure under a q-type assumption.
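The forward evaluation of the example DFA can be sketched in a few lines of Python. The encoding below (states 1 to 3, a partial transition map, accept state 3) is my own illustrative reconstruction of the talk's example, not code from the paper.

```python
# Toy encoding of the example DFA from the talk: 1 -a-> 2, 2 -b-> 3,
# plus self-loops on state 3. The transition map is partial: a missing
# entry means the machine gets stuck and rejects.
delta = {(1, 'a'): 2, (2, 'b'): 3, (3, 'a'): 3, (3, 'b'): 3}
accept = {3}

def forward_accepts(delta, accept, x, start=1):
    """Forward computation: track the single current state, symbol by symbol."""
    state = start
    for symbol in x:
        if (state, symbol) not in delta:
            return False                      # undefined transition: reject
        state = delta[(state, symbol)]
    return state in accept                    # accept iff we end in an accept state

assert forward_accepts(delta, accept, "ab")       # has prefix ab: accepted
assert forward_accepts(delta, accept, "abba")     # still has prefix ab
assert not forward_accepts(delta, accept, "ba")   # wrong first symbol: rejected
```

This matches the language described in the talk: every string that begins with ab is accepted, and everything else is rejected.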
With the powerful dual-system method, adaptively secure schemes were later proposed, but still under q-type assumptions. Recently, some progress has been made towards schemes based on the standard k-Linear assumption; however, the selective model came back. So it remained an open problem to build adaptively secure ABE for DFA under the k-Linear assumption. In this work, we resolve this open problem. We mention that our technique works for both DFAs and branching programs. This gives us a new ABE scheme for branching programs, which achieves compact ciphertexts in the so-called unbounded multi-use setting. We also mention that there is a concurrent work by Lin and Luo at Eurocrypt this year, but the two works employ different techniques. In our work, we take the recent scheme by Gong, Waters, and Wee as the starting point; we will call it the GWW scheme from now on. Note that the scheme is based on the k-Linear assumption, as we want, but its security model is selective. Let me give a review of their strategy and technique. Consider a key associated with F and a ciphertext associated with x. Both functionality and security rely on information about the computation of the DFA, that is, the states we reach while computing F(x). For functionality, they use the computation of the DFA as a witness during decryption. This is the paradigm inherited from the first ABE for DFA by Waters. For security, they embed the computation of the DFA into the reduction. This is also like Waters' proof based on a q-type assumption, but the crucial point here is that only local information about the computation is required for each reduction step. This allows them to finish the proof with the simpler k-Linear assumption. In comparison, Waters' proof embedded all the information into the reduction, which is where the complex q-type assumption comes in. However, this local information still depends on both F and x. In the adaptive model, the simulator may not know the input x at all when simulating keys.
Therefore, they prove security in the weaker selective model, where x is declared at the very beginning. To carry out this strategy, they assign two distinct computations of the DFA to functionality and security, respectively. In more detail, the so-called forward computation is for functionality while the so-called backward computation is for security. Let me explain the two ways to compute a DFA using this example first, and then go back to the GWW strategy. The forward computation is the normal way of computing a DFA, as I mentioned before. We start in state 1. The machine reads the first symbol, which is a. According to the transition function, the machine switches to state 2. Now, after reading one symbol, the machine has reached state 2. It then reads the second symbol. Again, we check the transition function; the machine switches to state 3 this time. Now the machine has read all symbols and is in state 3, which is an accept state. Therefore, F accepts the input x. The backward computation works analogously to the forward one, except that we do everything in reverse: we start from the accept states instead of the start state and read the input from right to left. Also, we use the transition function in reverse. Let us look at the same example F and x. We now start from the accept state 3. The machine first reads the last symbol of x, which is b. The transition function tells us that, when reading b, there are two possible states, 2 and 3, that lead the machine to state 3. So the machine switches back from state 3 to states 2 and 3; here, we are actually using the inverse of the transition function. Now the machine is in states 2 and 3 at the same time. Then the machine reads the next symbol, a. Again, the transition function tells us that, when reading a, there is one possible state that leads the machine to state 2, namely state 1.
There is also one possible state that leads the machine to state 3, namely state 3 itself. So the machine switches back to states 1 and 3 from the current states 2 and 3. Now the machine has read all symbols and is in states 1 and 3. Since the start state 1 is reached, the machine accepts the input x. One might notice that the two computations of the DFA we just showed give the same result. This is true in general: the forward and backward computations are equivalent in this sense, but they have different properties. Observe that the forward computation always hits exactly one state at each step, while the backward computation hits a set of states at each step. We can write down the states the machine reaches at each step recursively in general, and then easily verify the property I just mentioned. With the definitions and properties of the forward and backward computations of a DFA in hand, we can review the GWW strategy in a bit more detail. In their strategy, each step in the forward computation of the DFA corresponds to a partial result we need to compute during decryption, and each step in the backward computation of the DFA defines a hybrid in the security proof. This means that we need to know one state to compute each partial result, and we need to know a set of states to simulate each hybrid in the security proof. Here, it is the set F_i that describes the so-called local computation of the DFA we have mentioned. This makes a proof under the simpler k-Linear assumption possible; but clearly, it depends on both F and x. The very first idea towards adaptive security is to apply the piecewise guessing framework to the selectively secure scheme. However, we would have to guess the set F_i, which incurs a security loss exponential in q, since x is unknown at the beginning in the adaptive model. Even so, this is already interesting, since we avoid guessing the input x directly, which would incur an even larger security loss, as n is typically larger than q. However, an exponential loss is still not acceptable.
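The backward computation, and its equivalence with the forward one, can be checked mechanically. The sketch below encodes the same toy DFA and brute-forces all short inputs; the encoding and names are hypothetical, for illustration only.

```python
from itertools import product

delta = {(1, 'a'): 2, (2, 'b'): 3, (3, 'a'): 3, (3, 'b'): 3}
accept = {3}

def forward_accepts(delta, accept, x, start=1):
    """Forward computation: one current state per step."""
    state = start
    for symbol in x:
        if (state, symbol) not in delta:
            return False
        state = delta[(state, symbol)]
    return state in accept

def backward_accepts(delta, accept, x, start=1):
    """Backward computation: a SET of states per step, reading x right to left."""
    alive = set(accept)                           # begin at the accept states
    for symbol in reversed(x):
        # inverse transition: keep every u whose delta(u, symbol) is still alive
        alive = {u for (u, s), v in delta.items() if s == symbol and v in alive}
    return start in alive                         # accept iff the start state is hit

# The two computations agree on every input of length up to 5.
for n in range(6):
    for x in map(''.join, product('ab', repeat=n)):
        assert forward_accepts(delta, accept, x) == backward_accepts(delta, accept, x)
```

On x = "ab", `backward_accepts` passes through exactly the sets from the talk: {3}, then {2, 3}, then {1, 3}, and accepts because state 1 is in the final set.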
Remember, we have two distinct ways to compute a DFA. Our idea is to switch the roles of the forward and backward computations. Namely, we will use the forward computation for adaptive security, which means the backward computation will be used in decryption for functionality. By this, in the piecewise guessing framework, we only need to guess a single state in each adaptive hybrid, which means a polynomial security loss. To implement this strategy, we generate the secret key for the reversed F instead of the original F, and generate the ciphertext for the reversed x instead of the original x. We explain the reversed DFA with the reversed input using this example. The reverse of the DFA F is obtained by first reversing the transitions. Let us look at the blue arrow. It says that the machine switches from state 1 to state 2 when reading a. Then, in the reverse of F, it becomes a transition from 2 back to 1 when reading a. In the graph, this means we just reverse the arrow. We do the same to all other arrows. In general, whenever we have a transition from u to v in the DFA, we have a transition from v back to u in the reversed DFA. We also need to swap the start state and the accept states. This gives the reverse of F. The reverse of the input ab is just ba; in the graph, we just read it from right to left instead of from left to right. By the GWW strategy, for security, we need to use the backward computation of the reversed F. By the definition of the reversed F and x, this is basically the forward computation of the original F. Then we indeed get the property we need for the security proof, namely, we only need to guess a single state per hybrid. At the same time, for functionality, we use the forward computation of the reversed DFA. Similarly, this is basically the backward computation of the original F. This indeed implements our idea, but it is not sufficient to resolve the problem: we face a new challenge on the functionality side. The GWW scheme only supports DFAs as policies.
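As a sketch of the reversal step, the snippet below flips every transition and swaps start and accept states. Note that reversing a DFA generally yields an NFA, so the reversed transition map is set-valued; the encoding and function names are my own, for illustration only.

```python
def reverse_dfa(delta, accept, start=1):
    """Flip each transition u -s-> v into v -s-> u and swap start/accept.
    The result is in general an NFA: rev maps (state, symbol) to a SET of states."""
    rev = {}
    for (u, s), v in delta.items():
        rev.setdefault((v, s), set()).add(u)
    return rev, set(accept), {start}   # transitions, new start set, new accept set

def nfa_forward_accepts(rev, starts, accepts, x):
    """Forward computation of the reversed machine, tracking a set of states."""
    current = set(starts)
    for symbol in x:
        current = set().union(*(rev.get((u, symbol), set()) for u in current))
    return bool(current & accepts)

# The toy example DFA from the talk: 1 -a-> 2, 2 -b-> 3, self-loops on 3.
delta = {(1, 'a'): 2, (2, 'b'): 3, (3, 'a'): 3, (3, 'b'): 3}
rev, starts, accepts = reverse_dfa(delta, {3})

# Running the reversed machine forward on the reversed input "ba" retraces
# the backward computation of the original F on "ab": {3} -> {2, 3} -> {1, 3}.
assert nfa_forward_accepts(rev, starts, accepts, "ab"[::-1])
```

This makes the role switch concrete: the reversed machine running forward on the reversed input visits exactly the state sets of the original backward computation.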
That means decryption computes a partial result for a single state, and we do not know how to define partial results corresponding to a set of states. But we can learn from the GWW scheme. Let me give more details about the key and ciphertext structures using the example of the reversed F. The machine has three states, so the key includes three random values, d_1, d_2, and d_3, each of them corresponding to one state. The input has two symbols. Remember, the machine uses three steps to read the input, including the initial step. Therefore, the ciphertext has three random values, s_0, s_1, and s_2, each of them corresponding to one step. In general, q states give q random values in the key, and an m-symbol input gives m+1 random values in the ciphertext. Then, at step i, the machine reaches state u_i after reading i symbols of x. The partial result is defined as shown: basically, it multiplies the randomness s_i for step i with the randomness d_{u_i} for state u_i in the exponent. A direct extension of the partial result from a single state to a set of states is to define it as a set of values, each of them corresponding to a state in the set F_i. However, this is insecure. We fix the issue by defining the partial result as the product of all these values instead of giving them out individually. This gives us an ABE scheme for the reversed DFA. In summary, to obtain our ABE for DFA with adaptive security under the k-Linear assumption, we combine the recent selectively secure ABE for DFA with the piecewise guessing framework. However, a direct combination leads to an exponential security loss, so we additionally develop the reversed-DFA idea and build ABE for the reversed DFA so as to control the security loss. Besides that, our work includes more results. First, a slight extension of our ABE for the reversed DFA gives a scheme for a restricted class of NFAs in the selective model. Then, the technique can be used in the context of branching programs, as I have mentioned.
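To make the shape of these partial results concrete, here is a toy arithmetic sketch. This is my own simplification, not the actual pairing-based scheme: group elements are modeled as powers of a generator modulo a toy prime, and all randomness values are small fixed integers.

```python
p = 2**61 - 1    # toy prime modulus, chosen only for illustration
g = 3            # toy "generator"

d = {1: 11, 2: 22, 3: 33}   # toy per-state randomness d_1, d_2, d_3 (key side)
s = [5, 7, 9]               # toy per-step randomness s_0, s_1, s_2 (ciphertext side)

def partial_single(i, u):
    """Partial result for one state u at step i: g^(s_i * d_u) in the exponent,
    mirroring the single-state structure described for the GWW scheme."""
    return pow(g, s[i] * d[u], p)

def partial_set(i, states):
    """Aggregated partial result for a set of states: the PRODUCT of the
    single-state values, revealed as one element instead of individually."""
    result = 1
    for u in states:
        result = result * partial_single(i, u) % p
    return result

# The product collapses to a single exponent: g^(s_i * (d_2 + d_3)),
# so the individual per-state values are no longer given out separately.
assert partial_set(1, {2, 3}) == pow(g, s[1] * (d[2] + d[3]), p)
```

The point of the product is visible in the final assertion: the set of per-state values collapses into one group element whose exponent sums the d_u's, which is the fix described above.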
Third, all the schemes we just mentioned are key-policy, but their ciphertext-policy variants are also provided. Finally, let me mention two open problems. First, it would be interesting to have a new ABE scheme for standard NFAs, that is, to remove the restriction we set in our work. Second, it would be exciting to have an analogue of our technique in the lattice world, especially from the LWE assumption. Thank you for your attention.