Hi, I'm Thomas Agrikola, and I will present our paper on pseudorandom encodings. This is joint work with Geoffroy Couteau, Yuval Ishai, Stanisław Jarecki, and Amit Sahai.

Let's first recall the notion of compression. Compression aims to encode information with fewer bits than its original representation requires. In information theory, information is usually modeled as samples from some probability distribution X, and the goal of compression is to encode a sequence of samples from X into a short string such that the samples can still be efficiently recovered. There is a rich body of literature on compression in information theory. Later, Goldberg and Sipser initiated the study of compression in complexity theory. There, one does not consider compressing sequences of samples, but only a single sample.

From a high level, compression can be viewed as a generalization of two quite fundamental notions in complexity theory. On one hand, we have resource-bounded Kolmogorov complexity. The resource-bounded Kolmogorov complexity of a string is the length of the shortest efficient program which produces that string. This shortest program can be seen as a kind of compression where decoding is efficient. On the other hand, we have randomness condensers. Randomness condensers map a distribution into a distribution with a higher entropy rate, and hence can be seen as an efficient compression algorithm where decompression is not necessarily efficient. Perfect compression encodes the distribution into the uniform distribution, because otherwise one could have compressed further. Perfect compression allows both efficient encoding and efficient decoding, so it can be seen as unifying the core properties of the above two notions.

Now let's have a look at the usefulness of compression in cryptography. Consider the scenario where Alice and Bob want to confidentially exchange a message, but they only share a low-entropy secret key, for instance a password. Simply encrypting with this password allows a brute-force attack: one can try the most likely, or even all, keys until the decryption of the ciphertext gives a plaintext that looks valid in some sense, for instance one that looks like natural language. A solution was found by Bellovin and Merritt in 1992 using perfect compression: first perfectly compress the plaintext, and then encrypt. Bellovin and Merritt used this in a password-authenticated key exchange. This way, decryption with a wrong key yields a uniformly random string, and decryption with the actually used key also yields a uniformly random string, so the brute-force attack no longer works.

However, perfect compression algorithms are only known for a very small number of distributions. That is why we relax compression, with the goal of covering a much broader class of distributions. We relax compression in several dimensions. First, we drop the requirement of shorter outputs and allow the encoding algorithm to be randomized; this basically corresponds to honey encryption, an encryption notion which is resistant against brute-force attacks, similarly to the motivating example.
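To make the brute-force discussion concrete, here is a minimal toy sketch in Python of the compress-then-encrypt idea from the motivating example. Everything in it is an illustrative assumption of mine, not the scheme from the paper: a uniform distribution over a dictionary of 2^k words admits perfect compression (a word maps to its index), and SHA-256 of the password stands in for a proper cipher and key derivation.

```python
# Toy illustration of "compress, then encrypt" (Bellovin-Merritt idea).
import hashlib

WORDS = ["red", "blue", "green", "gold"]   # 2^2 equally likely plaintexts
K = 2                                      # bits after perfect compression

def keystream_bits(password: str, nbits: int) -> int:
    """Toy stand-in for a KDF/cipher: top `nbits` bits of SHA-256(password)."""
    digest = hashlib.sha256(password.encode()).digest()
    return int.from_bytes(digest, "big") >> (256 - nbits)

def encrypt(password: str, word: str) -> int:
    index = WORDS.index(word)                   # perfect compression: word -> uniform index
    return index ^ keystream_bits(password, K)  # one-time pad on K bits

def decrypt(password: str, ciphertext: int) -> str:
    index = ciphertext ^ keystream_bits(password, K)
    return WORDS[index]                         # decompression: index -> word

ct = encrypt("hunter2", "green")
# A brute-forcing attacker decrypts under candidate passwords, but every
# guess yields *some* valid dictionary word, so no guess can be ruled out:
for guess in ["hunter2", "123456", "letmein"]:
    print(f"{guess!r} -> {decrypt(guess, ct)!r}")
```

If we had instead encrypted the word's raw bytes, almost every wrong password would decrypt to a string outside the dictionary and could be discarded, which is exactly the brute-force attack described above.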
Further, we relax the indistinguishability requirement from information-theoretic to only computational indistinguishability, and we also allow for a trusted setup, in our case a common reference string (CRS).

Here is a short preview of our contributions. First, we derive negative results for many flavors of pseudorandom encodings, and also a positive result from indistinguishability obfuscation for a weaker notion. We prove an equivalence between pseudorandom encodings and a notion called invertible sampling, which was studied in the area of multiparty computation. We also derive quite a number of applications of pseudorandom encodings. These applications come from seemingly unrelated fields in cryptography, and we view identifying these connections as our main contribution.

In this presentation, we will focus on the hypothesis that every efficiently samplable distribution can be pseudorandomly encoded, and this includes distributions which may depend on some input m. Our relaxations of compression yield four basic notions, where the deterministic notions are stronger than the randomized ones, and the information-theoretic notions are stronger than the computational ones. We also consider notions with a common reference string, where the input m of the distribution may either be chosen adaptively, depending on the CRS, or must be chosen statically, independently of the CRS. Clearly, the notion without setup is the strongest and implies the other notions, and the adaptive notions are stronger than the static notions. Our goal will now be to understand this landscape of pseudorandom encodings.

Let's start with deterministic encodings. Clearly, a distribution with one very likely element cannot be pseudorandomly encoded with a deterministic encoding: one encoding is bound to appear with very high probability, and then the output cannot look pseudorandom if we also want correctness at the same time. If we exclude these distributions and restrict ourselves to what we call compatible distributions, we still find interesting connections. For instance, deterministic statistical pseudorandom encoding corresponds to the classical notion of compression and contradicts the existence of one-way functions. Deterministic computational PRE allows compressing a distribution down to its HILL entropy, but since there is a separation between conditional HILL and Yao entropy, we can rule this notion out as well.

In the following, we take a closer look at PRE with setup and the relations between the notions. Adaptive notions provide guarantees even if the input m for the distribution is chosen depending on the CRS, whereas static notions only provide guarantees if m is chosen independently of the CRS. So clearly, adaptive PRE implies static PRE. But interestingly, the other direction is true for pseudorandom encodings as well: we can use two instances of statically secure PRE to obtain adaptively secure PRE. If we want to pseudorandomly encode some distribution X, we take a static PRE scheme for X and a second static PRE scheme for the setup algorithm of the first scheme. We use the CRS of the second scheme as the CRS of our adaptive scheme; since the CRS distribution itself doesn't have an input, static and adaptive guarantees for this second scheme are equivalent. To encode, we sample a fresh CRS for the first scheme, encode the sample of X under this fresh CRS with static guarantees, and additionally encode the fresh CRS itself using the CRS of the adaptive scheme. This way, we can postpone the generation of the, let's call it, vulnerable CRS (the CRS for which it does make a difference when m is chosen) until after m is fixed, namely to the time of encoding, so choosing m adaptively depending on the CRS does not help anymore. That means our adaptive notion and the static notion are equivalent.
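As a sketch, the transformation can be written in symbols as follows. The notation is mine, hedged against the paper's actual syntax: $(\mathsf{Setup}_X, E_X, D_X)$ is the static scheme for $X$, $(\mathsf{Setup}_S, E_S, D_S)$ is the static scheme for the distribution $\mathsf{Setup}_X$, and $x \leftarrow X(m)$ is the sample to be encoded.

```latex
\begin{align*}
\mathsf{Setup}:\quad & \mathsf{crs} \leftarrow \mathsf{Setup}_S\\[2pt]
E(\mathsf{crs}, x):\quad & \mathsf{crs}_X \leftarrow \mathsf{Setup}_X
   \quad\text{(fresh, generated only now, after $m$ is fixed)}\\
 & \text{output } \big(E_S(\mathsf{crs}, \mathsf{crs}_X),\; E_X(\mathsf{crs}_X, x)\big)\\[2pt]
D(\mathsf{crs}, (c_1, c_2)):\quad & \mathsf{crs}_X := D_S(\mathsf{crs}, c_1);
   \quad \text{output } D_X(\mathsf{crs}_X, c_2)
\end{align*}
```

Static security suffices for the second scheme because its distribution $\mathsf{Setup}_X$ takes no input $m$, and the vulnerable CRS $\mathsf{crs}_X$ only comes into existence at encoding time, after $m$ is fixed.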
By this transformation, we can simplify our PRE landscape as follows.

PRE has several applications which follow just from the ability to encode into the uniform distribution and to decode again. A very natural example is honey encryption from the motivating example, or password-authenticated key exchange, and also variants of steganography and covert computation. Because of the equivalence between static and adaptive security, these applications already follow from our weakest notion, even though adaptivity is necessary for them.

As already mentioned, PRE is related to invertible sampling. Invertible sampling is the problem of recovering a random tape of a randomized algorithm, given an output of that algorithm. If we could do that for every efficient algorithm, one-way functions could not exist. Instead, it suffices to have an indistinguishable alternative sampler Ā that can be inverted with an inverse sampler Ā⁻¹. Invertible sampling has received quite some attention in the field of multiparty computation, basically because invertible sampling allows sampling from a distribution without knowing the corresponding secret information; it allows for oblivious sampling. Ishai et al. proved that invertible sampling for all PPT algorithms is equivalent to fully adaptively secure MPC for all PPT functionalities.

As it turns out, pseudorandom encodings and invertible sampling are equivalent. Let's first give slightly more formal definitions to prove this. A distribution A can be pseudorandomly encoded if there are algorithms E and D such that E applied to the original distribution A looks like uniform randomness (pseudorandomness), but decoding an encoded sample recovers the original sample (correctness). On the other hand, a distribution A is invertibly samplable if there exists an alternative sampler Ā which is indistinguishable from A (closeness), and samples from Ā together with the actually used randomness are indistinguishable from samples from Ā together with inverse-sampled randomness produced by the inverse sampler Ā⁻¹ (invertibility). The idea behind the equivalence is that decoding corresponds to a de-randomized alternative sampler, and encoding corresponds to an inverse sampler.

First, we prove that invertible sampling implies pseudorandom encodings. Define D as the de-randomized alternative sampler, D(r̄) := Ā(r̄), and E as the inverse sampler Ā⁻¹. For correctness, consider the tuple (D(r̄), r̄). Clearly, D is a deterministic algorithm, so D applied to the right component of this tuple yields the left component. Since D corresponds to the alternative sampler, we can replace the actually used randomness by inverse-sampled randomness using the invertibility property. And because of closeness, the alternative sampler D is indistinguishable from the original sampler A. Since D applied to the right component must still yield the left component, correctness follows. For pseudorandomness, consider the encoded distribution E(A). Because of closeness, we can replace the sampler A with the alternative sampler Ā, which is the same as D. Finally, E(D(r̄)) corresponds to inverse-sampled randomness, which we can replace with true randomness because of the invertibility property.
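In symbols, the two hybrid chains just described read as follows. This is a sketch with the notation assumed from above, D(r̄) := Ā(r̄) and E := Ā⁻¹; here $\approx$ denotes indistinguishability and $a \leftarrow A$ a sample from the original sampler.

```latex
\begin{align*}
\text{pseudorandomness:}\quad
  & E(A) \;\approx\; E(\bar A(\bar r)) \;=\; \bar A^{-1}(\bar A(\bar r)) \;\approx\; \bar r
  && \text{(closeness, then invertibility)}\\[2pt]
\text{correctness:}\quad
  & \big(D(\bar r),\, \bar r\big)
    \;\approx\; \big(\bar A(\bar r),\, \bar A^{-1}(\bar A(\bar r))\big)
    \;\approx\; \big(a,\, E(a)\big)
  && \text{(invertibility, then closeness)}
\end{align*}
```

In the leftmost correctness hybrid, the first component equals D applied to the second; since this relation is efficiently checkable, it must also hold, except with negligible probability, in the rightmost hybrid, giving D(E(a)) = a.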
Now let's prove that pseudorandom encodings imply invertible sampling, defining Ā(r̄) := D(r̄) and Ā⁻¹ := E. To prove closeness, we start with the original sampler A. Because of correctness, first encoding A and then decoding again doesn't change much. Then pseudorandomness allows us to replace E(A) with true randomness, and we have closeness. For invertibility, we start with a tuple consisting of a sample from the alternative sampler together with inverse-sampled randomness. Since we already proved closeness, we can replace the alternative sampler with the actual sampler A. Because of correctness, encoding and then decoding A does not change much, and then pseudorandomness lets us replace the encoding of A with true randomness. This gives invertibility.

That means our landscape now looks as follows. Since invertible sampling allows for oblivious sampling, this notion conflicts with extractable one-way functions: sampling an image of an extractable one-way function should not be possible without knowing a preimage. This was observed by Ishai et al. We extend this to the statistical randomized case, which yields a refutation in the plain model under LWE. Through the connection to invertible sampling, we obtain a seemingly unrelated class of applications, such as non-committing encryption, adaptive MPC, and succinct adaptive MPC. Furthermore, due to Dachman-Soled et al., there is an instantiation of statically secure invertible sampling from indistinguishability obfuscation (iO), which was called an explainability compiler. By our equivalence results and our static-to-adaptive transformation, these applications are now all possible from polynomial iO.

There is also subsequent work. For the weakest notion of pseudorandom encodings, we have not only a positive result but also a negative one, based on extractable one-way functions with unbounded auxiliary input; due to Bitansky et al., that assumption in turn contradicts indistinguishability obfuscation. Therefore, we asked whether PRE for all efficiently samplable distributions can be achieved from anything that does not already imply iO. Very recently, Wee and Wichs settled this question by proposing a new iO construction from pseudorandom encodings and LWE; they basically use a very restricted form of oblivious sampling for LWE.

To conclude, we provide a systematic study of several flavors of pseudorandom encodings and identify computational randomized PRE with setup as a useful and achievable notion. Through the equivalence between PRE and invertible sampling, we obtain unexpected connections between several areas of cryptography. In particular, we obtain a relation between covert MPC and adaptive MPC, two different security notions for multiparty computation. Finally, PRE provides a new way to look at things, and it may find more applications. Thank you for your attention.