We are going to hear about Verifiable Functional Encryption. This paper is by Saikrishna, Vipul, Aayush and Amit from UCLA and MSR, and Saikrishna will give the talk. Thanks for the introduction. Hi everyone, I am Saikrishna. I am going to be talking about Verifiable Functional Encryption. This is joint work with Aayush, who just gave a talk before, Vipul and Amit. Before we move on to what the problem is, let us look at classical encryption. Say there is some user Bob who has some data X; he encrypts it and sends it over to some cloud, and the red blob denotes the ciphertext. Now, if this cloud has the secret key for the encryption scheme, it can recover the plaintext message X in the clear, but if the cloud does not have the secret key, it learns nothing at all about the message that is encrypted. This is the classical notion of encryption that we all know. In some sense this is an all-or-nothing paradigm: you either learn everything about the message or nothing at all. So, what is functional encryption? Again we have the same user Bob, and he wants to talk to a cloud service provider, but now we want to give more fine-grained access to the private data. As before, Bob encrypts his data and sends it over to the cloud, but now the cloud wishes to compute some function F on the plaintext underlying this encrypted data. So it talks to a trusted party who holds the master secret key behind the entire system, and it tells the party that it wants to compute this function F. If the trusted party decides that the cloud is indeed authorized to compute F, it returns a secret key for this function F. Now, using this secret key and the ciphertext, the cloud can compute the function on the underlying plaintext, that is, it learns the value F(X). But what is the security guarantee that we want?
We want to say that if the cloud service provider, the decryptor, is malicious, it should not learn anything about the message that was encrypted apart from F(X). We will formulate this a little later, but intuitively we want to say that the secret key for the function should not allow the adversary, the decryptor, to compute anything other than the value F(X) given the ciphertext. Functional encryption is a very general paradigm: it is a generalization of attribute-based encryption, predicate encryption and so on. Before we move on, let us formally describe the syntax of a functional encryption scheme. There is a setup algorithm that outputs a master secret key and a master public key. The master public key is given out to everyone in the system; they can use it along with their own message X to run the encryption algorithm and produce a ciphertext. Whenever a decryptor wants to evaluate some function F, it can talk to the trusted party, who runs the function key generation algorithm on the function F and the master secret key to compute the function secret key. The decryption algorithm then takes any ciphertext and the function secret key and computes the evaluation of the function on the underlying plaintext. What is the security requirement? Let us look at it in a more formal setting. We call this indistinguishability security, or IND security. Consider an adversarial cloud who has the function secret key for some function F, and consider two worlds. In the left world, the user Bob encrypts some message X that he has, and in the right world he encrypts some message Y, and he sends the ciphertext to the cloud. As we saw earlier, we want to say that the adversary cannot distinguish between the left and right worlds, because this is what indistinguishability security would mean.
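As a rough illustration of the four-algorithm syntax just described, here is a toy Python sketch of the FE interface. It is deliberately insecure (the "ciphertext" carries the plaintext in the clear) and only shows the data flow between setup, encryption, key generation and decryption; all names (`ToyFE` and so on) are hypothetical, not from the paper.

```python
import secrets

class ToyFE:
    """Toy functional encryption API: setup / encrypt / keygen / decrypt.
    Deliberately INSECURE placeholder: the ciphertext stores the plaintext.
    It only illustrates the four-algorithm syntax, not a real construction."""

    def setup(self):
        msk = secrets.token_hex(16)  # master secret key (placeholder)
        mpk = "mpk-" + msk[:8]       # master public key (placeholder)
        return mpk, msk

    def encrypt(self, mpk, x):
        # Anyone with the master public key can encrypt their message x.
        return {"mpk": mpk, "payload": x}

    def keygen(self, msk, f):
        # The trusted authority issues a function secret key for f.
        return {"f": f}

    def decrypt(self, skf, ct):
        # The decryptor learns f(x) and, ideally, nothing else.
        return skf["f"](ct["payload"])

fe = ToyFE()
mpk, msk = fe.setup()
ct = fe.encrypt(mpk, 7)
skf = fe.keygen(msk, lambda x: x * x)
print(fe.decrypt(skf, ct))  # 49
```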
But clearly this is impossible to achieve, because in the left world the adversary learns the value F(X) and in the right world it learns the value F(Y), which was the correctness requirement we wanted from this notion, so it can trivially distinguish the two worlds whenever F(X) is not equal to F(Y). So we relax the security notion to say that if the output of the function is the same in the left world and the right world, that is F(X) = F(Y), then the adversary should not be able to distinguish between the two worlds. This is a simplified version where just one secret key and one ciphertext are given to the adversary, but you can imagine generalizing this to multiple secret keys and multiple ciphertexts, with a similar constraint imposed. So, what is an example of functional encryption in a more practical scenario? There are several banks, and the World Bank is the trusted master authority. Now there is an auditor who wants to audit all these banks and check whether they are doing whatever they are supposed to do. In particular, suppose he wants to compute a function F on the data that all these banks have. One bank with some data X encrypts it and sends it to the auditor; the auditor gets the function secret key for F, computes the value F(X), and everyone in the system is happy. We want to say that if the auditor is malicious, he should not learn anything about the bank's sensitive data other than the value F(X) that he was indeed supposed to learn. This is the motivating example, and it appears in one of the earliest works on functional encryption, GPSW06. Now let us see whether there are any drawbacks with this motivating example.
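The relaxed IND constraint, that the challenge pair must agree on every queried function, can be sketched as a small admissibility check. This helper is purely illustrative (not part of any scheme), and the parity example below is a hypothetical choice of function.

```python
def admissible(x, y, queried_functions):
    """IND-security admissibility: the challenge pair (x, y) is allowed
    only if every queried function agrees on the two messages,
    i.e. f(x) == f(y) for all queried f."""
    return all(f(x) == f(y) for f in queried_functions)

# A parity key leaks only one bit, so 2 and 4 are an admissible pair,
# while 2 and 3 are not (parity distinguishes them trivially):
print(admissible(2, 4, [lambda v: v % 2]))  # True
print(admissible(2, 3, [lambda v: v % 2]))  # False
```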
Again, let us look at the case of the auditor, and say there are two banks now, one with input X and the other with input Y. They both send their encrypted data to the auditor, the auditor computes F(X) and F(Y), and everyone is happy. But suppose the second bank is malicious: it has committed some fraud and does not want to give its actual data to the auditor. So instead of encrypting Y, it sends some garbled version of a random string, and then the auditor does not learn F(Y) but instead learns some random value Z. Functional encryption provides no guarantee in such a setting, because it only talks about a malicious decryptor; here we have a malicious encryptor who is not behaving as it is supposed to. Let us consider a more interesting scenario where it is not the bank that does something wrong: the bank still sends the correct value X encrypted, and the auditor wants to learn F(X) and G(X), but the World Bank is corrupt for some reason. Maybe it talks to the bank with value X and decides that it will get some money from the bank for the fraudulent activity, and then it does not give the correct function key for G. It gives some other, wrong function key that behaves like a function key for G in the sense that for all inputs other than X it acts like the correct function key for G, but on just this input X it does not work correctly. The auditor has no way of knowing that the function key for G that he receives is not the correct one, and now he is simply misled, because he believes that the value Z he learns is G(X), which it is not. So in this scenario we also want to protect against a malicious authority, the authority which generates the public keys and the function secret keys, and the traditional notion of functional encryption offers no security guarantee in this setting.
So, in this work we propose the notion of verifiability, which says the following. Suppose there is some user with the master public key and some ciphertext. He can run a public verification algorithm on the ciphertext, which tells him whether the ciphertext was correctly generated; by correctly generated I mean that there was some message which was used to run the actual encryption algorithm to produce the ciphertext. This public verification algorithm does not tell the user what the message inside the ciphertext was, so he is still happy and there is no security loss with respect to traditional functional encryption. Now he knows that he has a well-formed ciphertext. He then gets a function secret key for his function F from the authority; again he runs a public verification algorithm, which tells him whether this function secret key was generated correctly by running the actual function secret key generation algorithm. So now he has a well-formed ciphertext and a well-formed function secret key, he can run the decryption algorithm to recover the value F(x), and he is guaranteed that this is indeed the value F(x) that he wishes to learn, and he is happy. To put it more formally, verifiability says that for all master public keys generated in the system and for every valid ciphertext, where by validity I mean that the public verification algorithm tells you that the ciphertext is valid, there exists some fixed message x underlying the ciphertext, the plaintext, such that for every function F and every valid function secret key SK_F, when you run the decryption algorithm on this ciphertext and the function secret key, you are guaranteed that the value F(x) is output. So, what are our results in this setting? We first show that simulation-secure verifiable FE is impossible.
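The verifiability condition just stated can be written out symbolically; here VerifyCT and VerifyK are hypothetical names for the two public verification algorithms, chosen only for this sketch.

```latex
\forall\, \mathsf{mpk},\ \forall\, \mathsf{CT}\ \text{with}\ \mathsf{VerifyCT}(\mathsf{mpk}, \mathsf{CT}) = 1,\ \ \exists\, x\ \text{such that} \\
\forall\, f,\ \forall\, \mathsf{sk}_f\ \text{with}\ \mathsf{VerifyK}(\mathsf{mpk}, f, \mathsf{sk}_f) = 1:\quad \mathsf{Dec}(\mathsf{sk}_f, \mathsf{CT}) = f(x).
```

Note the order of quantifiers: the message x is fixed by the ciphertext alone, before any function key is chosen, which is what rules out a malicious authority equivocating between function keys.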
The FE security definition we spoke about before was indistinguishability security, and we show that in this setting simulation security is impossible. Our main result is a generic compiler that takes any public-key IND-secure FE scheme and transforms it into a public-key IND-secure and verifiable FE scheme. We do not require too many additional assumptions for this transformation; I will get back to the assumptions needed a little later. We also have generic compilers in the settings of secret-key functional encryption and multi-input functional encryption. We also look at the closely related notion of obfuscation and give a generic compiler from an indistinguishability obfuscation scheme to a verifiable one. And we show that verifiable FE can be used to construct functional commitments, a recently introduced primitive. If you instantiate our transformation with various functional encryption schemes, and even their weaker notions like predicate encryption and attribute-based encryption, you get several schemes for verifiable functional encryption under various assumptions: starting with verifiable identity-based encryption, which can be based on just the BDH assumption, all the way up to verifiable functional encryption for all circuits, which can be based on iO. First, let us see whether the problem is trivial: can it be solved very easily? Say we have an underlying functional encryption scheme that we want to transform into a verifiable one. We just run the setup of this functional encryption scheme and generate the master secret key and the master public key. Whenever we want to encrypt some message X, we compute the underlying encryption as before, by running the encryption algorithm of the functional encryption scheme, and we also compute a non-interactive zero-knowledge (NIZK) proof that the encryption was done correctly.
Now, if the proof verifies, you are guaranteed that the encryption was done correctly, and you can verify the proof publicly, so everything seems easy. But there is a catch: in order to generate the NIZK proof you need a common reference string, and this has to be generated at setup time. Remember that our goal was to protect against malicious authorities, and the authority is the one generating the common reference string itself. If it is malicious, it could generate a malicious CRS and compute fake proofs, and the whole guarantee of the NIZK is gone. Therefore we cannot use a NIZK, and we need some other techniques to solve the problem. Let us look at a closely related proof system called a NIWI, a non-interactive witness-indistinguishable proof, which has the following property. Suppose there is some statement X, and a prover wants to prove that the statement X is in some language. Say there are two witnesses W0 and W1, both available to the prover; the prover picks one of the two witnesses at random and gives a proof using that witness, and the verifier cannot guess which of the two witnesses was used to generate the proof. This is much weaker than a NIZK, because with a NIZK the verifier learns nothing at all about the witness, while here he just does not learn which of the two witnesses was used. So let us try to construct verifiable FE using a NIWI instead of a NIZK. Again we take the underlying functional encryption scheme, but this time we use it twice in parallel. In the setup algorithm we run it twice to generate two master public keys and two master secret keys; the red and green blobs denote the two individual functional encryption systems.
Now, whenever you want to encrypt a message X, you construct a red ciphertext using the red encryption scheme and a green ciphertext using the green encryption scheme, and then you construct a NIWI proof that says that either the red blob encrypts X or the green blob encrypts X. Note that you have two witnesses here: the witness for the red encryption or the witness for the green encryption. Your ciphertext is just these two encryptions and the NIWI proof. Similarly, whenever you want to generate a function secret key, you generate two function secret keys using the red and green systems, and you prove using a NIWI that one of them was indeed generated correctly. Does this work? What would the decryption algorithm be? The decryption algorithm decrypts the red ciphertext with the red function secret key and the green ciphertext with the green function secret key, and checks whether both of them give the same value; if so, it outputs that value, and if not, it just outputs both. Let us see whether this gives us IND security. Look at the challenge ciphertext: we initially start with an encryption of x and we want to move to an encryption of y. So we start with an encryption of x, and in the first hybrid we prove that the red blob was correctly generated. In the second hybrid we again prove that the red blob was correctly generated, but transform the green blob to be an encryption of y instead of an encryption of x. Why is this hybrid indistinguishable from the previous one? Because from the security of the underlying functional encryption we know that f(x) = f(y), so these two hybrids are indistinguishable.
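The strawman decryption rule just described, decrypting with both parallel systems and comparing, can be sketched as follows. This is an illustrative helper over the two candidate outputs, with hypothetical names; it stands in for running the real FE decryption twice.

```python
def two_system_decrypt(red_result, green_result):
    """Strawman decryption for the two-parallel-system construction.
    red_result / green_result stand for Dec(sk_red, ct_red) and
    Dec(sk_green, ct_green). If the two systems agree, output the
    common value; otherwise output both, since neither can be trusted."""
    if red_result == green_result:
        return red_result
    return (red_result, green_result)

print(two_system_decrypt(10, 10))  # 10
print(two_system_decrypt(10, 99))  # (10, 99)
```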
In the next hybrid we change the proof: we prove that the green blob was correctly encrypted. Now the red system is free; it is only the green system that is being used in the proof. Therefore, in the following hybrid we can change the red blob to be an encryption of y, and notice that this is now a correct encryption of y, so we have moved from an encryption of x to an encryption of y. This achieves IND security, but do we also achieve verifiability with this construction? Let us look at the construction again. Consider a ciphertext that was proved correct using the red system and a function secret key that was proved correct using the green system. In this setting, notice that there is no verifiability that you achieve. Why is that? Because when you decrypt the red ciphertext, which was correctly generated, with a maliciously generated red function secret key, you do not recover f(x); and when you decrypt the green ciphertext, which was maliciously generated, with the correctly generated green function secret key, again you do not recover f(x). Intuitively, this tells us that you need a majority of correct systems on both the ciphertext side and the key side, so that at least one system has both a correct function secret key and a correct ciphertext, and when you decrypt you get the correct value. The takeaway from this strawman construction is that verifiability requires a majority of systems to be proven correct. So let us look at another attempt, now with three systems, a red, a green and a blue one, where we prove with the NIWI that two of them were correctly encrypted to generate the ciphertext, and that two of them are correct function keys in the function key generation algorithm. Does this solve the problem? Unfortunately, we cannot argue message hiding, that is indistinguishability security. Why is that?
Initially we start with an encryption of x, and we prove using the red and the green systems that they were correctly encrypted. So the blue system is free to change as we wish, and in the next hybrid we change the blue system to be an encryption of y. Now what do we do? We are stuck, because we would have to make either the red or the green system free so that we can change it to be an encryption of y. But if we move the proof from, say, the green system over to the blue one, we cannot argue that the red blob and the blue blob are encryptions of the same message, because clearly the red one is an encryption of x and the blue one is an encryption of y. So we are stuck here. What does this tell us? It tells us that in order to prove indistinguishability security, to switch from an encryption of x to an encryption of y, you need a majority of free systems, so that you can change each of them one by one to an encryption of y and then switch the proof over to the other side. But what about verifiability? Does this construction even give us verifiability? It turns out that we cannot even argue verifiability in this setting. Say the ciphertext proves that the red and green systems are correct, and the function secret key proves that the green and blue function keys are correct. When you decrypt each pair individually, it is only the middle, green pair that gives you f(x); the red and blue pairs might give you some garbage values, because at least one value in each of those pairs is maliciously generated. So if only one of the three decryptions gives the value f(x), how do you even know which one is correct?
Unless you have a majority of correct values, you cannot be certain which value is f(x). So the takeaway from this construction is that for verifiability you don't just need a majority of systems to be correct; in fact, you need a common majority of correct systems across every function key and every ciphertext. Notice that if you have a common majority of correct systems, then you can decrypt and be confident that all of the majority decrypted values equal f(x). Just to stress this again, this is the main bottleneck we face: in order to prove verifiability you need a common majority of correctly proved systems, while in order to prove message privacy, that is indistinguishability security, you need a majority of free systems that are not proven to be correct, that are not bound by the proof. These two goals seem contradictory, because in one case you want a majority of systems to be free, not proven at all, and in the other case you want a majority of systems to in fact be proven correct. So we need some new techniques. The first idea we use to solve this is a lock, which works just like a commitment. Now let's look at 5 systems; I am not going to list all the colors, you can see what the 5 colors are. We compute 5 ciphertexts using all these underlying systems, and instead of just proving that a majority of them encrypt x correctly, we prove that either more than a majority of them, that is 4 of them, encrypt the value x correctly, or, via a new trapdoor branch, that only 2 of these 5 encrypt x correctly but the public parameter Z is a commitment to all 5 ciphertexts.
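The "lock" Z can be thought of as a binding commitment to the five ciphertexts. As a toy sketch, the consistency check inside the trapdoor branch might look like the following; note this hash-based commitment has no hiding randomness, so it is illustrative only and not the commitment scheme used in the paper, and all names are hypothetical.

```python
import hashlib
import json

def commit(ciphertexts):
    """Toy binding commitment to a list of ciphertexts: a hash of a
    canonical encoding. Stands in for the lock / public parameter Z.
    (No hiding randomness, so this is a sketch, not a real commitment.)"""
    data = json.dumps(ciphertexts, sort_keys=True).encode()
    return hashlib.sha256(data).hexdigest()

def trapdoor_branch_ok(z, ciphertexts):
    """Trapdoor-branch check (sketch): the public parameter z must be a
    commitment to exactly the 5 ciphertexts appearing in the encryption."""
    return z == commit(ciphertexts)

cts = ["ct1", "ct2", "ct3", "ct4", "ct5"]
z = commit(cts)
print(trapdoor_branch_ok(z, cts))                    # True
print(trapdoor_branch_ok(z, cts[:4] + ["forged"]))   # False
```

The binding property is what links the three "free" systems back to the ciphertext even though the NIWI only proves two of them correct.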
Notice that this is the first instance where we get something that could solve both our problems. Why? Because the first branch gives you a majority of systems that are bound, which is what verifiability needed, while the second branch gives you a majority of free systems: only 2 of them are used in the NIWI proof, and the other 3 are free to change as you wish, although they are still linked to the ciphertext through the public parameter. Similarly, in order to generate a function secret key, we generate 5 function secret keys, and the NIWI shows either that all 5 are correct function secret keys for the function f, or that 4 of them are correct, with the additional constraint that, looking at the ciphertexts committed inside the public parameter Z, when you decrypt each of those underlying ciphertexts with each of the function keys being generated, all these decryptions give out the same value. Why is this useful? Let's try proving indistinguishability security. We initially start with an encryption of x, where all 5 systems encrypt x, and we prove that the first 4 encrypt x correctly. Then we switch the function secret key to prove not that all 5 are function keys for f, but rather that 4 are keys for f and that when you decrypt the ciphertexts in the public parameter they all decrypt to the same value; notice that initially the public parameter is just garbage, not a commitment to any ciphertext. We then switch the encryption to prove the trapdoor statement, where we set the public parameter to be a commitment to all the ciphertexts we generate and prove that the first 2 indices are generated correctly as encryptions of x.
Now notice that the last 3 systems are free, in the sense that they are not bound to be proven correct, and we can change each of them from an encryption of x to an encryption of y; if you look at the function keys, the decryption constraint is still satisfied because f(x) is still equal to f(y). So we switch all of the last 3 indices to be encryptions of y rather than encryptions of x, and this is the crucial point in the proof: because of the trapdoor statement we have a majority of systems that are free to be switched, and we have switched all of them from encryptions of x to encryptions of y. The rest of the proof is fairly straightforward: we switch the proof to cover the last 2 indices instead of the first 2, which frees the first 2 systems; we can then switch both of them to encryptions of y, and the result is a valid encryption of y. What about verifiability? Does this system already give us verifiability? If the function key was generated using the trapdoor statement and the encryption was generated using the trapdoor statement, then by the guarantee on the decryptions we know that all of them decrypt to f(x) itself, so we are happy there. But what about the scenario where the function keys are generated using the correct statement, that all of them are indeed keys for f, but we use the trapdoor statement for generating the ciphertext? Now only 2 of the ciphertext indices are generated correctly, and the commitment is essentially useless, because the key side says nothing about it; we can view the commitment as playing no role at all in this setting. Therefore only the first 2 indices, the ones proved correct in the encryption, decrypt correctly to give f(x), while the last 3 indices give you some garbage value. So now you have a majority of garbage values, and you cannot do anything with this.
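The reason a majority of garbage values is fatal can be seen from the decryption rule for the parallel-system constructions: output the value backed by a strict majority of the per-system decryptions. This helper is an illustrative sketch, assuming (as the common-majority argument requires) that a strict majority of the systems is correct in both the ciphertext and the function key.

```python
from collections import Counter

def majority_decrypt(candidate_outputs):
    """Majority-vote decryption over the per-system decryption results.
    A value backed by a strict majority can be trusted to equal f(x)
    when a common majority of systems is correct in both the ciphertext
    and the function secret key; with no strict majority, verifiability
    gives no guarantee and we output nothing."""
    value, count = Counter(candidate_outputs).most_common(1)[0]
    if count > len(candidate_outputs) // 2:
        return value
    return None

print(majority_decrypt([42, 42, 42, 7, 9]))  # 42: common majority wins
print(majority_decrypt([10, 20, 30]))        # None: no strict majority
```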
It turns out that this is easy to fix: we just introduce another lock and ensure that these two scenarios can never co-occur. This was the only glitch in proving verifiability, and if the two scenarios can never co-occur, the whole construction is verifiable. I would like to point you to the paper if you have more questions or want to know more about the techniques. Thank you. We've probably got time for one quick question; then let's thank Saikrishna and all the speakers in the session again.