Hi everyone, my name is Jiaxin Guan, and in this video I'll be talking about our work Disappearing Cryptography in the Bounded Storage Model. This is joint work with Mark Zhandry; both of us are from Princeton University and NTT Research. As you can tell, our title consists of two parts, the first part being Disappearing Cryptography. So what do we mean by Disappearing Cryptography? Does it mean that the crypto just disappeared? Not really. Let me show you what we mean through a series of stories.

Let us invite our favorite friends, Alice and Bob. Suppose Alice is sending an encrypted message to Bob, and here comes our eavesdropper Eve, who is listening on the channel and intercepts the message. What Eve can do is make a copy of the ciphertext, and at a later time, through social engineering or interrogation, extract the secret key from Bob, use that secret key to decrypt the ciphertext, and maybe recover, say, a cute picture from it. What we want is that even if Eve is able to get the secret key in the future, the previously transmitted messages remain secure. This is what's referred to as forward secrecy, essentially meaning that a secret key leaked in the future should not compromise the security of messages that have been transmitted before that.

Now let's consider the following scenario. Again, Alice is sending over some encrypted messages to Bob, and here comes our eavesdropper Eve, who makes a copy of the ciphertext. Here, assume that through some magic, and we don't care about how it happens or how it's implemented, the ciphertext just magically disappears. At a later time, Eve is able to get the private key from Bob, but there's really nothing she can do, because there isn't even a ciphertext for her to decrypt. What we have seen on this slide actually trivially fulfills what we call receiver-deniable encryption. So what is receiver-deniable encryption? It is a public key encryption scheme where Alice sends over a whole bunch of messages to Bob, and there's the eavesdropper Eve, who makes a copy of these ciphertexts. At a later time, Bob is required to reveal a private key to Eve, which Eve will use to decrypt the recorded ciphertexts. The scheme is called receiver-deniable if Bob is able to provide a fake private key under which the ciphertexts decrypt to fake messages. Notice that if the ciphertexts disappear, then this vacuously holds, because the eavesdropper Eve cannot record any ciphertext.

Now let's consider a different story. It's Friday night, Alice is holding a party at her house, and she wants to invite Bob over. So Alice sends over a message saying there's a party happening tonight, and in order to convince Bob that the message indeed comes from Alice, Alice also attaches a signature to the message. Here comes our annoying eavesdropper Eve, who makes a copy of the message together with the signature, but doesn't do anything else malicious yet. Then at a later time, say Tuesday night, Eve simply replays the message together with the signature to Bob. From Bob's point of view, he receives a message saying there's a party tonight, and the signature is indeed Alice's valid signature on the message, leading Bob to believe that there is another party going on Tuesday night at Alice's house. So Bob goes over to Alice's place, only to find that there's actually no party going on.
In order to prevent this sort of replay attack in the standard model, we would either require the receiver, Bob in this case, to keep state, or we would require interaction between Alice and Bob in the signature protocol. But now let's consider a hypothetical scenario. Again, Alice sends over a message together with a signature to Bob, and the eavesdropper makes a copy of the message together with the signature. Now guess what? Yeah, the signature just magically disappears. The eavesdropper may then attempt to replay the message to Bob, but the eavesdropper doesn't have a valid signature for it. So Eve can never lead Bob to believe that the replayed message was actually sent by Alice again.

OK, now let's consider the last story. There is a software company that sends a program P to its users. A user has in mind a couple of inputs, X1, X2, and so on, that they want to evaluate the program on. While the program is being sent, the user can simply run the program on X1, X2, and obtain the corresponding results. But then guess what? Yeah, magically the program disappears. At a later time, the user might have another input, X prime, that they want to evaluate the program on. But now that the program is gone, there is no way for the user to obtain P of X prime. Notice that this can easily be turned into a software subscription system where the user loses access to the program once the subscription expires. Here, we don't need to worry about the user storing the program, because the program disappears once the transmission ends.

So now you are probably asking, how do we actually get these disappearing ciphertexts, disappearing signatures, and disappearing programs? Indeed, that is not possible in the standard model, because you can simply write down whatever is being sent. That's why we rely on the bounded storage model, which was first put forward by Ueli Maurer in 1992. Traditionally, when we talk about an adversary in cryptography, the adversary is bounded by time: the adversary needs to perform the attack within time polynomial in the security parameter n. In the bounded storage model, however, the adversary can take as long as it wants to finish the attack, but it is bounded by the amount of storage it uses. Namely, we require the adversary to use at most p(n) bits of memory for some fixed polynomial p. Now, how does this help us in constructing disappearing cryptography? Well, we can imagine that, say, in the case of disappearing ciphertexts, the ciphertext being transmitted is so large that it exceeds the adversary's storage bound. Then, for an adversary that is bounded by space, the ciphertext cannot be written down and effectively disappears after the transmission.

So in our work, we initiate the study of disappearing cryptography in the bounded storage model. We investigate four different schemes. The first is disappearing public key encryption, as we have seen in the first story. We further extend it to a disappearing functional encryption scheme. Then we study disappearing signatures, which correspond to the second story. And lastly, we define online obfuscation in the bounded storage model, which corresponds to the last story, where we have the disappearing program. In fact, we show that online obfuscation is a really useful tool by showing how to construct the prior three schemes using online obfuscation as a component.
Lastly, we also give two candidate constructions for online obfuscation. The first construction uses matrix branching programs, a common technique for constructing indistinguishability obfuscation in the standard model. The second candidate construction comes from time-stamping schemes in the bounded storage model, due to Moran, Shaltiel, and Ta-Shma. We want to point out that we were not able to give full security proofs for these candidate constructions, which is why we leave them as interesting open problems. In this video, we will first talk about how we define online obfuscation, then show how we use it to construct disappearing public key encryption, and lastly how it can be used to construct disappearing signatures.

So moving on, we'll start with online obfuscation. To define online obfuscation, we imagine the following syntax. There is an obfuscator that takes as input a circuit C and outputs an obfuscated program P, which is so large that the adversary cannot possibly write it down in its entirety. However, this large program P can be sent out in a streaming manner, bit by bit, so that the honest parties, who also have a limited amount of storage, can still run the program on their inputs in an online manner. Notice that in this talk, we will use this blue background color to indicate that a message or a variable is a stream. Then there is the corresponding Eval procedure, which takes as input the obfuscated program P, which is a stream, together with an input X, and outputs Y by evaluating the program P on the input X in an online manner. We require correctness, which essentially says that the result of evaluating the obfuscated program P on an input X is the same as evaluating the original circuit C on X.

To define security, we consider two different experiments. In the first experiment, the challenger interacts with an adversary; in the second experiment, the challenger interacts with a simulator. Both experiments consist of an arbitrary number of rounds, and each round is either an interaction round or what we call a streaming round. The interaction rounds are the same for both experiments: the challenger interacts with the adversary or the simulator arbitrarily, with no limitation whatsoever on how they communicate. The streaming rounds are slightly different. In a streaming round of the adversary's experiment, the adversary gets a fresh stream of an obfuscated program P, and the challenger is notified that a streaming has happened. The simulator, however, does not get access to the stream of the obfuscated program P. Instead, it is allowed to make adaptive queries to the original circuit C, where the total number of queries is bounded by poly(lambda), with lambda the security parameter. Again, the challenger is notified that a streaming has happened. Notice that these rounds do not need to happen in any particular order: there can be interaction rounds first, followed by some streaming rounds, then some interaction rounds again, and then some more streaming rounds. They don't need to follow the specific ordering of interaction rounds first and then streaming rounds. However, we do bound the total number of streaming rounds to be at most K.
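Just to make the syntax concrete, here is a tiny, insecure Python sketch of the Obfuscate/Eval streaming interface described above. This is purely illustrative and not from the paper: the function names and the idea of streaming the program as a sequence of rows are my own placeholder choices, and a real online obfuscator would stream an encoding whose total size exceeds the adversary's storage bound.

from typing import Callable, Iterator, Tuple

def obfuscate(circuit: Callable[[int], int], domain_bits: int) -> Iterator[Tuple[int, int]]:
    # Toy "obfuscation": stream the circuit one (input, output) row at a time,
    # so the full program never needs to sit in memory at once.
    for x in range(2 ** domain_bits):
        yield (x, circuit(x))

def eval_stream(program: Iterator[Tuple[int, int]], x: int) -> int:
    # Online evaluation: consume the stream once, keeping only constant local state.
    result = None
    for inp, out in program:
        if inp == x:
            result = out
    return result

# Correctness check on a toy circuit (parity of a 4-bit input):
parity = lambda v: bin(v).count("1") % 2
assert eval_stream(obfuscate(parity, 4), 5) == parity(5)

Of course, streaming the truth table offers no security at all; it only shows the shape of the interface, with the evaluator working in low space while the stream passes by.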
One more rule about these experiments: if the challenger ever sees more than K streaming rounds, it aborts the experiment and just outputs 0. Additionally, at any point during the experiment, the challenger can terminate it by outputting a single bit, 0 or 1.

Using these two experiments, we can define the security we want. The main security definition that we use for the applications is similar to virtual grey-box (VGB) security. What K-time VGB security says is that for any challenger and any adversary with space bound S, there exists a computationally unbounded simulator such that the challenger cannot tell whether it is interacting with the adversary, which has access to the streamed obfuscated program P, or with the simulator, which only gets adaptive queries to the circuit C in the streaming rounds. If you're watching the video, maybe take a minute to pause and think through this definition. Similarly, we can define different flavors of obfuscation. We can define indistinguishability obfuscation here by changing the total number of adaptive queries the simulator can make to the circuit C in the streaming rounds: instead of poly(lambda), it can now be super-polynomial in lambda. And we can define VBB security, virtual black-box security, where the simulator is no longer computationally unbounded but is a PPT simulator. In fact, just as VBB obfuscation is impossible in the standard model, we show that VBB obfuscation is also impossible in the bounded storage model.

Now, with the definition of VGB security for an online obfuscator, we can proceed to see how we define disappearing public key encryption and how we construct it using an online obfuscator. Let's consider one of the security definitions of a public key encryption scheme in the standard model. The challenger first samples a public key, secret key pair together with a random bit b, and sends the public key to the adversary. The adversary sends over a pair of challenge messages M0 and M1, and the challenger encrypts one of them chosen according to b. The adversary then needs to make a guess b prime for the bit b, and the adversary wins the game if b prime is equal to b. Our definition of disappearing public key encryption involves only two minor changes to this definition. First, the ciphertext is now going to disappear, which means it is a stream too large for the adversary to write down. Second, before the adversary makes its guess for the bit b, the adversary is actually given the secret key of the scheme. Notice that in the standard model, an adversary given the secret key can trivially decrypt the ciphertext and learn the bit b. But here, because we are in the bounded storage model and the ciphertext has effectively disappeared, even with the secret key there is nothing for the adversary to decrypt. Let's take a moment to see why this security definition makes sense. To construct a disappearing public key encryption scheme as we have just defined, we require one additional tool other than online obfuscation.
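Before introducing that tool, here is a hypothetical Python harness for the disappearing PKE security game just described. The interfaces scheme.keygen, scheme.enc_stream, and the adversary callbacks are placeholder names of my own, not from the paper; the point is only that the adversary sees the ciphertext one block at a time and receives the secret key only after the stream has gone by.

import secrets

def disappearing_pke_game(scheme, adversary) -> bool:
    # Challenger samples a key pair and a random challenge bit b.
    pk, sk = scheme.keygen()
    b = secrets.randbits(1)
    # Adversary picks the two challenge messages after seeing the public key.
    m0, m1 = adversary.choose_messages(pk)
    # The challenge ciphertext is streamed; a space-bounded adversary can look
    # at each block as it passes but cannot record the whole stream.
    for block in scheme.enc_stream(pk, m1 if b else m0):
        adversary.observe_block(block)
    # Only after the ciphertext has disappeared is the secret key revealed.
    b_guess = adversary.guess(sk)
    return b_guess == b  # the adversary wins iff it recovers b

In the standard model, handing out the secret key at the end would make this game trivially winnable; here, the stream has already passed, so there is nothing left to decrypt.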
That additional tool is lossy functions, a weakening of lossy trapdoor functions, which were first put forward by Peikert and Waters in 2008. Here we no longer need the trapdoor feature, so lossy functions are really all we need. So what are lossy functions? They are a family of functions that can be sampled in two different modes. In the injective mode, we sample an injective function from the domain to the range. In the lossy mode, we sample a function from the same domain to a much, much smaller range, so there are guaranteed to be collisions. Lossy functions require that a function sampled in the injective mode is computationally indistinguishable from a function sampled in the lossy mode.

With this in mind, let's see how we construct the public key encryption scheme. Here is our construction. To sample a public key, private key pair, Alice first samples an injective function in the injective mode of the lossy function family, together with a uniformly random private key SK. What is the public key, you might ask? Well, the public key has two parts. The first part is simply the image of SK under the injective function, and the second part is the injective function itself. Let's call the image of SK under the injective function Y, so the public key is just Y together with the injective function. Now suppose Bob has a message M that he wants to encrypt under this public key. As the first step of the encryption procedure, Bob creates the following program: the program P takes an input X and checks whether the image of X under the injective function is equal to Y. If so, the program simply outputs the message M; otherwise, the program outputs nothing. The ciphertext is just the obfuscated version of the program P. Notice that this obfuscated program is a long stream, too large for the adversary to write down, and Bob sends it over to Alice in a bit-by-bit streaming manner. On Alice's end, to decrypt the ciphertext, Alice simply evaluates the streamed program using the secret key SK as input. As we can see, if the input X to the program P is equal to SK, then we are in the first case and the program simply outputs M, as desired. It is easy to verify that if the obfuscator is correct, then this scheme is correct as well.

We will show security of our construction through a sequence of hybrids. Let's plug our construction into the original security experiment. The challenger first samples an injective function in the injective mode of the lossy function family, together with a uniform private key SK and a uniformly random bit b. The challenger sends over the public key, consisting of the two parts above, and the adversary sends over the challenge messages M0 and M1. The challenger picks one of them according to b; to encrypt Mb, it constructs the program we have just seen and streams the obfuscated version of that program back to the adversary. Additionally, at a later time, the challenger also sends over the private key SK, and the adversary then makes its guess b prime for the bit b.
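Before walking through the hybrids, here is a toy Python sketch of the construction just described, only to pin down the shapes of the three algorithms. Everything in it is a placeholder I chose for illustration: SHA-256 stands in for the injective-mode function, and the "obfuscated" ciphertext is streamed as a bare, unobfuscated closure, so this toy has none of the intended security.

import os
import hashlib
from typing import Callable, Iterator

def keygen():
    f_key = os.urandom(16)                                  # description of the "injective" function f (placeholder)
    sk = os.urandom(16)                                      # uniformly random secret key
    y = hashlib.sha256(f_key + sk).digest()                  # y = f(sk)
    return (y, f_key), sk                                    # pk = (y, f), sk

def enc_stream(pk, m: bytes) -> Iterator[Callable[[bytes], bytes]]:
    y, f_key = pk
    # The ciphertext is (an obfuscation of) the program P that outputs m
    # exactly when f(x) = y; here we stream the plain closure as one block.
    def P(x: bytes) -> bytes:
        return m if hashlib.sha256(f_key + x).digest() == y else b""
    yield P

def dec(ct_stream: Iterator[Callable[[bytes], bytes]], sk: bytes) -> bytes:
    # Decryption just evaluates the streamed program on the secret key.
    for P in ct_stream:
        return P(sk)

pk, sk = keygen()
assert dec(enc_stream(pk, b"party tonight"), sk) == b"party tonight"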
In our first hybrid, notice that since the function is injective, instead of checking whether the image of X is equal to the image of SK, the program can directly check whether X is equal to SK. We make this modification in the program being obfuscated, and because the functionality is exactly the same, the iO security, which is implied by the VGB security of the online obfuscator, tells us that these two experiments are indistinguishable. In the next hybrid, instead of sampling in the injective mode of the lossy function, we sample in the lossy mode. Then, just by the security of the lossy function itself, no PPT adversary should be able to distinguish whether we are sampling in the lossy mode or in the injective mode. Lastly, we change the ciphertext program once more: this time we change it so that the program never outputs M. Notice that this only affects the program's behavior on a single point, SK. But since the function F is now a lossy function, SK is statistically hidden from the adversary, who only knows PK at the time the ciphertext is being streamed. This means the adversary will not be able to query the obfuscated program on the input SK, and therefore the adversary cannot detect the change made in this hybrid. Notice that although the adversary is later given SK in the clear, at that point it is no longer able to query the obfuscated program, because that stream has already happened and the program has disappeared. Therefore, through this sequence of hybrids, we have shown that assuming the existence of lossy functions and online obfuscation with virtual grey-box security, no PPT adversary with a space bound can win the disappearing public key encryption security game.

Next up, let us move on to disappearing signatures. Let us begin by looking at how we define security for the disappearing signature scheme. It is quite similar to the unforgeability security of a standard model signature scheme, so let's recall what that is. In a standard model signature scheme, security is defined as follows. The challenger samples the public key, secret key pair to be used for the signatures and sends the public key to the adversary. The adversary sends over a message query Mi and receives a corresponding signature on that message, produced using the secret key. The adversary can repeat this process over and over, obtaining as many message and signature pairs as it wants. At a later time, the adversary needs to produce another message, signature pair, where the message M prime is not one of the messages that has been queried before. The adversary wins if the message, signature pair it outputs is valid.

So how do we modify this for the disappearing signature scheme? Well, first of all, the signatures themselves are now going to be streams, so long that the adversary cannot possibly store them. Notice that although the adversary receives these signatures, it only sees them as they are being streamed and can never write them down. When the adversary submits the message, signature pair M prime and sigma prime, notice that sigma prime is now also a stream. And we no longer require the message M prime to be distinct from the previously queried messages: now M prime can be any message, even one of the messages that has been queried before. Notice that in the standard model this would not be possible, because the adversary could simply replay one of the previous signatures.
But in the bounded storage model, the previous signatures are streams and cannot be written down. So even if the adversary is going to reuse an old message, it still needs to come up with a fresh signature, because the old signature cannot be stored and replayed at a later time.

With that definition in mind, let's look at our construction. For our construction, we need one additional tool, prefix puncturable signatures, which were first put forward by Bellare and Fuchsbauer in 2014. A prefix puncturable signature scheme is a signature scheme whose message space looks like this: instead of just a message, there is an additional prefix X attached to the beginning of the message. The key generation, signing, and verification procedures have exactly the same syntax as in a normal signature scheme. However, there is one additional procedure called Puncture, which takes as input the private key SK and a prefix X star and gives you a punctured key SK sub X star. What this punctured private key allows you to do is sign any message except those with prefix X star. So you can use it to sign a message that is, say, X1 followed by M, or X2 followed by M, but you cannot use it to sign a message that is X star followed by M. For this punctured private key we require strong correctness, which essentially says that the signature you get by using the punctured key on any message whose prefix differs from X star is exactly the same as the signature you would get from the original private key. Notice that this is a strengthening of the original result by Bellare and Fuchsbauer in 2014: originally, the only correctness required was that a signature produced by the punctured key verifies as a valid signature, not that it is exactly the same signature you would get from the original private key. For our application we actually need this strong correctness, and we show in the paper how to strengthen their result to get it. Additionally, we require punctured key security, which says that no PPT adversary that takes as input a punctured key, together with the public key and the prefix the key is punctured on, can produce a valid signature on a message that starts with the prefix X star. This essentially says that you cannot use the punctured key to sign a message whose prefix was punctured.

So let's jump into our construction, which works as follows. The public key, secret key pair is simply the public key and secret key you get from the prefix puncturable signature scheme, and Alice sends the public key over to Bob. When Alice wants to sign a message, she constructs the following program: the program embeds the private key SK and the message M, and it treats its input as a prefix, signing the message with that prefix attached using the embedded private key SK. Alice then sends over the message together with the signature, which is now a stream, namely the obfuscated version of this program, over to Bob.
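As with the encryption scheme, here is a toy Python sketch of the disappearing signature construction, including the verification step described next. It is only meant to show the shape of the algorithms: HMAC stands in for the prefix puncturable signature scheme (it is a MAC, not a public key signature, and has no puncturing), and the streamed "signature" is a bare closure rather than an actual obfuscated program.

import os
import hmac
import hashlib
import secrets
from typing import Callable, Iterator

# Toy stand-in for the prefix puncturable signature scheme (no puncturing, not public-key).
def pps_keygen():
    k = os.urandom(32)
    return k, k                                   # (pk, sk) collapse to a single key in this toy

def pps_sign(sk: bytes, msg: bytes) -> bytes:
    return hmac.new(sk, msg, hashlib.sha256).digest()

def pps_verify(pk: bytes, msg: bytes, sig: bytes) -> bool:
    return hmac.compare_digest(sig, hmac.new(pk, msg, hashlib.sha256).digest())

def sign_stream(sk: bytes, m: bytes) -> Iterator[Callable[[bytes], bytes]]:
    # Alice's signature is (an obfuscation of) the program that, given a prefix x,
    # returns a signature on x || m under the embedded key; here streamed as one block.
    def P(x: bytes) -> bytes:
        return pps_sign(sk, x + m)
    yield P

def verify_stream(pk: bytes, m: bytes, sig_stream: Iterator[Callable[[bytes], bytes]]) -> bool:
    x_star = secrets.token_bytes(16)              # Bob picks a fresh random prefix
    sigma_star = b""
    for P in sig_stream:                          # online evaluation of the streamed program on x*
        sigma_star = P(x_star)
    return pps_verify(pk, x_star + m, sigma_star)

pk, sk = pps_keygen()
assert verify_stream(pk, b"party tonight", sign_stream(sk, b"party tonight"))

Because the verifier picks a fresh random prefix each time, an adversary who merely watched an old signature stream go by cannot replay it: to pass verification it would have to produce a signature on a prefix it has never seen signed.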
For Bob to verify the signature, Bob first samples a random prefix X star and then evaluates the streamed obfuscated program, using Eval in an online manner, on the input X star, the random prefix he has just picked, obtaining a signature sigma star. Bob's output is then the result of running the verification algorithm of the prefix puncturable signature scheme using the public key on the message consisting of the original message M with the prefix X star attached in front. Notice that correctness is easily implied by the correctness of the prefix puncturable signature scheme used in the construction.

Now let's look at how this construction fulfills our unforgeability security requirement. Let us first plug our construction into the original experiment. The challenger first samples the public key, secret key pair using the prefix puncturable signature scheme and sends the public key to the adversary. The adversary queries a whole bunch of messages, and to answer these queries the challenger creates the corresponding program, which is essentially just the signing procedure of the prefix puncturable signature scheme, and sends over the signatures, which are now obfuscated programs streamed to the adversary. The adversary can of course query as many messages as it wants, and in the end it outputs a message, signature pair, for any message. To verify the adversary's output, the challenger samples a random prefix X star, evaluates the submitted obfuscated program on the input X star to obtain a signature sigma star, and the output of the experiment is simply the result of running the verification of the prefix puncturable signature scheme on X star followed by M prime, with sigma star.

Again, we show security using a sequence of hybrids. In the first hybrid, notice that the challenger needs to sample this random prefix X star; instead of sampling it after receiving the message, signature pair from the adversary, we now sample it at the very beginning of the experiment, right after we sample the public key, secret key pair. Right after sampling the prefix X star, we also sample a punctured private key SK sub X star, but we don't use this punctured key anywhere. So we are just reordering things and not yet using the punctured key, and this hybrid is indistinguishable to the adversary. In the next step, we gradually modify the obfuscated programs, starting from the adversary's last query all the way back to the first query. How do we modify these programs? We modify them so that they reject on the prefix X star. Since they now reject on the prefix X star, we only need to sign messages whose prefix differs from X star, and these can easily be signed using the punctured private key SK sub X star. So instead of embedding the original private key SK in these programs, we now embed the punctured private key SK sub X star. This step is indistinguishable to the adversary because X star is still information-theoretically hidden from the adversary, since it is not used anywhere else. But now, notice that the entire view of the adversary can be simulated using only the punctured private key SK sub X star.
But if, at the end, the adversary is able to come up with a valid signature under the prefix X star, that violates the punctured key security of the prefix puncturable signature scheme. Therefore, we have shown that assuming online obfuscation with VGB security together with prefix puncturable signatures, no PPT adversary with a space bound can break our disappearing signature security game.

To conclude, we initiated the study of disappearing cryptography in the bounded storage model by investigating four different schemes, corresponding to the disappearing ciphertexts, disappearing signatures, and disappearing programs we saw in the examples at the beginning of the talk. Usually, the bounded storage model is used to prove things information-theoretically. Here, however, by combining the bounded storage model with computational assumptions, we are able to achieve results that were never possible before. For example, in the disappearing PKE scheme, we achieve a form of forward secrecy that is only possible in the standard model if we update the private keys, and updating the keys can be undesirable in many circumstances. Secondly, we also initiated the study of obfuscation in the bounded storage model. Just as standard model obfuscation has proven to be a central tool in the study of standard model cryptography, our work demonstrates that online obfuscation in the bounded storage model is analogously a central tool in the study of disappearing cryptography. And just as standard model obfuscation schemes started out as conjectures, we hope that future work will improve the status of our candidate constructions or come up with a full security proof for them. We believe these are interesting open questions to explore.

Lastly, here is a link to the ePrint version of our paper. You can give it a read if you're interested, and contact me if you have any questions. Thank you for your time.