Hello everyone, my name is Jiaxin Guan, and in this video I'll be talking about our work on incompressible cryptography. This is joint work with Daniel Wichs and Mark Zhandry. To begin, let's take a look at several motivating examples. Say we have our favorite friends Alice and Bob, and they're not doing anything crazy, just running a very simple protocol: Alice sends an encrypted document over to Bob. Here comes our eavesdropper Eve, who doesn't do anything super malicious either. Eve simply eavesdrops on the channel and makes a copy of the encrypted document, that is, the ciphertext. Then at a later point in time, maybe a month later, a year later, or even 20 years later, Eve somehow gets hold of the private key used for the conversation, perhaps through a security breach, through social engineering, or even by going directly to Bob and interrogating him. Either way, once Eve has the private key, she can simply use this valid private key to decrypt the stored ciphertext and thereby retrieve the original message sent by Alice. To prevent this, we need something called forward secrecy, essentially saying that a private key leaked in the future should not compromise the security of messages sent before the leak. However, to achieve forward secrecy, we would either need the protocol to have multiple rounds, or we would require Alice and Bob to constantly update their keys. Both of these can be undesirable in many scenarios. Similarly, consider this scenario: Alice is again talking to Bob by sending over some encrypted document, and along comes eavesdropper Eve, who makes a copy of the encrypted document.
But at a later time, when Eve interrogates Bob for the key, Bob can, instead of providing the actual private key used in the conversation, provide a fake private key which nevertheless causes the ciphertext to decrypt, but this time to a fake message, say a cute picture of a cat instead of the super secret message from before. Eve has no way of telling whether Bob provided the actual private key or a fake one. This is what we call receiver-deniable encryption, and in fact it has been shown that receiver-deniable encryption is impossible in the standard model. In the last example, let's say Alice is holding a party at her house on a Saturday night and sends a party invitation over to Bob. To convince Bob that this invitation actually comes from Alice, she attaches a signature to the message stating that there is a party tonight. Again comes our eavesdropper Eve, who eavesdrops on the channel and makes a copy of the message together with the signature. Then at a later point in time, say Tuesday night when everybody is busy working or resting, Eve simply replays the message together with the signature to Bob. From Bob's point of view, he receives a message from Alice saying there's a party on Tuesday night, and there is in fact a valid signature on the message, leading Bob to believe there really is a party, so he goes to Alice's place only to find out there's no party at all. This is what we call a replay attack on signatures. It turns out that to prevent replay attacks, we would either need to make the protocol multi-round, thereby requiring interaction between the two parties, or require Alice and Bob to maintain synchronized clocks, or require Bob, the verifier, to maintain a state.
Again, all three of these options can be undesirable in many cases. Notice that in all three motivating examples, the adversary needs to store the ciphertext or the signature. So here's the natural question: what if the adversary simply cannot store the ciphertext or the signature? What happens then? In the first two examples, Alice sends an encrypted document over to Bob, and along comes eavesdropper Eve, who tries to make a copy of it, but suppose Eve cannot store the actual encrypted document. At a later time, remember, Eve interrogates Bob for the private key, but now, even with the private key, there is no ciphertext left for the eavesdropper to decrypt. There is nothing the eavesdropper can do even with the actual private key, and therefore we trivially have forward secrecy. At the same time, since there is no ciphertext for the eavesdropper to decrypt, the scheme is also vacuously receiver-deniable. In the last example, Alice sends a message together with a valid signature over to Bob, and the eavesdropper tries to make a copy of the message together with the signature. Suppose the eavesdropper can only store the message, not the actual signature. Then later, when Eve tries to mount the replay attack by resending the message and the signature, it can only send the message without the signature, and therefore fails to convince Bob that the message actually comes from Alice. So how exactly do we ensure that the eavesdropper cannot store the ciphertext or the signature? The key idea is this: say we're encrypting a very small message; we manually blow up that small message into a huge ciphertext or signature whose size is too large for the eavesdropper to store. In fact, this idea was used in a previous paper by Guan and Zhandry called Disappearing Cryptography in the Bounded Storage Model.
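To make the blow-up idea concrete, here is a minimal toy sketch, not the paper's actual construction: the secret key selects a handful of positions inside a huge random string R, the ciphertext is R together with the message XORed against those positions, and an eavesdropper who cannot store all of R learns essentially nothing. All names and parameters here are illustrative assumptions.

```python
import secrets

def keygen(ct_len: int, msg_len: int) -> list[int]:
    # Key = msg_len secret positions inside the large random block R.
    return [secrets.randbelow(ct_len) for _ in range(msg_len)]

def encrypt(key: list[int], msg_bits: list[int], ct_len: int):
    # Huge random part R: this is what blows the ciphertext up past
    # the eavesdropper's storage bound.
    r = [secrets.randbelow(2) for _ in range(ct_len)]
    pad = [r[i] for i in key]                       # bits R selects via the key
    body = [m ^ p for m, p in zip(msg_bits, pad)]   # one-time-pad the message
    return r, body                                  # ciphertext size is about ct_len

def decrypt(key: list[int], ct):
    r, body = ct
    return [b ^ r[i] for b, i in zip(body, key)]

msg = [1, 0, 1, 1]
key = keygen(10_000, len(msg))
ct = encrypt(key, msg, 10_000)
assert decrypt(key, ct) == msg
```

A tiny 4-bit message is carried by a 10,000-bit ciphertext; the rate here is terrible, which is exactly the low-rate regime the talk returns to later.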
That paper is set in the bounded storage model, which is what allows it to restrict the eavesdropper's ability to store the ciphertext or the signatures. So what is the bounded storage model? Traditionally in cryptography, when we talk about an adversary, we imagine that the adversary is time-bounded: the adversary needs to perform the attack within an amount of time polynomial in the security parameter n. However, in the bounded storage model, first put forward by Ueli Maurer in 1992, the adversary can take as long as it wants; we impose no time limits at all. Instead, we bound the number of memory bits the adversary can use throughout the entire attack: the adversary can use at most p(n) memory bits, where p is a fixed polynomial in the security parameter n. Concretely, we can imagine p being something like n squared, so the adversary needs to perform the attack within space less than n^2, whereas the protocol remains secure for honest parties who only use memory on the order of n. So there is an inherent quadratic gap between the adversary's space and the honest users' space. The disappearing cryptography paper defines disappearing public-key encryption in the following way. It is a game played between an adversary and a challenger, very much like the standard-model PKE game. The challenger samples a public key/private key pair, together with a challenge bit b, and sends the public key over to the adversary. The adversary sends back two challenge messages m_0 and m_1, and receives an encryption of one of them, depending on the challenge bit b. In the end, the adversary makes a guess for the bit b, and wins if the guess is correct. So this is just the standard-model PKE game; let's now look at how it is adapted into the disappearing PKE security game.
First of all, since we're moving into the bounded storage model, we require that the adversary maintain space at most S, for a fixed memory bound S. Notice that this space constraint is in addition to the computational assumptions on the adversary: the disappearing cryptography paper assumes the adversary is both space-bounded and time-bounded, so here the adversary is bounded by space S and also by a polynomial running time. With this bounded space, we can imagine ciphertexts so large that they exceed the adversary's storage bound. And then, after the adversary receives the ciphertext, we can actually afford to send the private key over to the adversary before it makes its final guess. This is okay because, thinking back to our motivating examples, even given the private key there is nothing the adversary can do, since it cannot store the ciphertexts. A key difference between this work, incompressible cryptography, and the previous work, disappearing cryptography, is how the memory bound is imposed on the adversary. Imagine the following two adversaries. Adversary one, while Alice is sending the ciphertext over to Bob, writes down all of the bits being sent, and then, after the transmission has ended, computes a function with a small output, essentially compressing the ciphertext down to a smaller state. It then maintains only that smaller state until a security breach happens. On the right-hand side, adversary two simply writes down the bits as the transmission goes on and maintains that full amount of memory the entire time.
Now, if we set the memory bound somewhere between the peak memory of adversary one and the memory used by adversary two, then adversary one on the left is not captured by the bounded storage model with bound S, because the bounded storage model requires the adversary to be space-bounded throughout the entire game. However, we think adversary one is actually a quite valid attack, because if you think about it, using a large amount of memory for a very short time is not that intimidating; it's actually quite achievable. Maintaining even a moderate amount of memory over an extended period of time seems much more challenging. As a real-life example: if you ask me to store 10 terabytes of data for only 10 minutes, I'm happy to do that, because I personally have 10 terabytes of storage at home and I'm happy to devote it for 10 minutes. But if you ask me to store, say, two terabytes of data for 20 years, that seems much more intimidating, because I need to maintain those two terabytes throughout 20 years, that extended period of time, and who knows what will happen. Personally, I might move house, throw out a hard disk, and fail the game. So essentially, what we want here is to capture adversaries that use a large chunk of memory while the transmission is going on but then compress down to a smaller state for long-term storage, hence the name incompressible cryptography. Now we can finally get back to this work, incompressible cryptography. We'll look at how we define incompressible public-key encryption and how it differs from disappearing public-key encryption. On this slide we reproduce the disappearing PKE security game, and we'll see how to modify it into the incompressible PKE security game.
The first thing we do is remove the storage bound on the adversary during the game, so the adversary is no longer bounded by space S throughout. Instead, we use a pair of adversaries that cannot communicate with each other; there is no private channel between them. The first adversary gets the public key, sends over the challenge messages, and receives the ciphertext, whereas the second adversary receives the private key and makes the actual guess. After the first adversary receives the ciphertext, it needs to compress it down to a state st whose size is within the memory bound S. When the second adversary is given the private key, it is also given that state, together with the public key of the scheme and some auxiliary input. This auxiliary input is produced by the first adversary at the beginning of the experiment, and you can think of it as containing the shared randomness of the two adversaries; for non-uniform adversaries, we can do without the auxiliary input entirely. Notice that this game captures the idea that the adversary receives the ciphertext and can then do whatever it wants using however much space it wants, but in the end it must compress down to a state st used for long-term storage. In fact, this incompressible PKE security definition implies disappearing PKE security. Note that this is only security-wise; as we'll mention in the next few slides, functionality-wise they are still different. Okay. In our work, we show two constructions of incompressible encryption, and in fact they have some additional features beyond the security requirement on the previous slide. One of these features is called low-space streaming.
Recall that disappearing cryptography is set in the bounded storage model, where we require that the honest parties be able to run the protocol with a memory bound somewhat lower than the adversary's. However, with incompressible encryption we are moving away from the bounded storage model, so running the honest parties of an incompressible encryption scheme may actually require high space. Therefore, an incompressible encryption scheme might not be a disappearing encryption scheme, since the honest parties may require high space. However, we show that one of our constructions supports this low-space streaming feature, so that encryption and decryption (and signing and verification, for an incompressible signature scheme) can be run in low space. So if we have an incompressible public-key encryption scheme, whose security already implies the security of a disappearing PKE, and we add the low-space streaming feature to it, we simply get a disappearing PKE scheme. Another feature we have is rate-1 incompressible encryption. What is the rate? The rate is defined as the size of the message over the size of the ciphertext. For a low-rate PKE, we take a very small message and blow it up into a huge ciphertext that exceeds the adversary's storage bound. Now let's think about it the other way around: fix the ciphertext size to be somewhat larger than the adversary's storage bound, and ask ourselves how large a message we can encrypt at that ciphertext size. In fact, we show that one of our constructions achieves an optimal rate of one, which lets us encrypt a message almost as large as the ciphertext itself.
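The rate definition above is just a ratio, but it is worth seeing the two regimes side by side. The concrete numbers below are illustrative assumptions, not parameters from the paper.

```python
def rate(msg_len: int, ct_len: int) -> float:
    # Rate of an encryption scheme: |message| / |ciphertext|.
    return msg_len / ct_len

adversary_storage = 10**9          # the ciphertext must exceed this bound

# Low-rate regime: a 128-bit message blown up past the storage bound.
low = rate(128, adversary_storage + 1)

# Rate-1 regime: fix the ciphertext just above the bound and pack in a
# message nearly as large as the ciphertext itself.
high = rate(adversary_storage, adversary_storage + 128)

assert low < 1e-6          # vanishingly small rate
assert 0.99 < high < 1     # rate approaching the optimal value of 1
```

So at the same ciphertext size, a rate-1 scheme carries roughly a billion bits of message where the low-rate scheme carries 128.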
Earlier, when I talked about the key idea, I said the idea is to manually blow up the ciphertext or signature size. Well, I was lying a little there, because in the rate-1 setting we're not really blowing up the size; the size stays essentially the same. What we're really blowing up is the entropy, since the messages themselves could have very low entropy. All right, to quickly summarize what we did in this paper, incompressible cryptography: first of all, we define incompressible PKE and incompressible signatures. We show that low-rate incompressible PKE can be constructed from any standard-model PKE, and that low-rate incompressible signatures can be constructed from one-way functions alone. Notice that both of these constructions support low-space streaming, meaning the honest parties can run in very low space. On the other hand, we also construct incompressible PKE and incompressible signatures with an optimal rate of one. We construct rate-1 incompressible PKE from indistinguishability obfuscation (iO), and we show that rate-1 incompressible signatures are actually equivalent to incompressible encodings, which were first put forward by Moran and Wichs in 2020. They show that incompressible encodings can be constructed assuming either learning with errors or decisional composite residuosity, in either the random oracle model or the common reference string model. If you're interested in any of these constructions or the security proofs, feel free to take a look at the ePrint version of our paper; there's a link on the next page. But for today's talk, the takeaway message I want to give is this: for a long time, people have been using the bounded storage model to prove things information-theoretically.
However, in this work, together with the previous work on disappearing cryptography, we show that by combining space constraints and time constraints on the adversary, we can achieve results that were never possible before. So one thing I'd encourage you to think about is this: what other new possibilities open up if we impose both time and space constraints on the adversary? Thank you for your time, and let me know if you have any questions. Thank you.