Okay. So the third speaker is Damiano Abram, who is going to talk about distributed correlation samplers, which is joint work with Peter Scholl and Sophia Yakoubov. So hi everybody, my name is Damiano and now I'm going to present our paper, which is titled "Distributed (Correlation) Samplers: How to Remove a Trusted Dealer in One Round". This is the result of a collaboration between myself, Peter Scholl and Sophia Yakoubov. The main contributions of our work are two. The first one is the introduction of a new cryptographic primitive called distributed samplers. These are basically one-round protocols that allow n parties to generate any CRS in a secure way, using, as I said, only one round of interaction. For instance, we can use them to generate an RSA modulus without leaking the factorization and using only one round of communication. In the paper, we present definitions of distributed samplers under different security flavors, and then we present the first constructions of this type, building them from polynomially secure iO. The second contribution is the study of public-key PCFs. PCF stands for pseudorandom correlation function; these are one-round n-party protocols that generate a large amount of correlated material with communication sublinear in the size of the outputs and only one round of interaction. The primitive was already introduced by Orlandi et al. at Eurocrypt 2021, but in their paper they present constructions that work only for OT and vector-OLE correlations. In this work, instead, we formalize the notion of public-key PCFs, and we present the first constructions that work for any correlation. Again, we build them from obfuscation. I will start by talking about distributed samplers. We are in the n-party setting, and we allow up to n minus one corruptions.
Our goal is to generate a sample R from a distribution D in a secure way, and we model the distribution as a polynomial-time algorithm that takes as input the security parameter and random coins. We want to design constructions that use only one round of communication, so each party P_i will send a single message U_i simultaneously to all the other parties. After that, everybody is able to recompute the output by simply applying a deterministic function to the transcript. Here, you can notice that the output is public, so even an adversary that just listens to the communication is able to retrieve it. We consider two security settings. The first one is security against non-rushing semi-malicious adversaries. A semi-malicious adversary is like a semi-honest one, so it has to follow the protocol, but it can also choose the random tapes of the corrupted parties as it likes. Since this adversary is non-rushing, the choice of the random tapes has to be made before the messages of the honest players are delivered. In this setting, the functionality implemented by distributed samplers is really simple: it just generates a sample from the distribution and outputs it to all the parties, including the corrupted ones. We also analyze active security, and here we have to modify the functionality a little bit. In particular, we have to allow the adversary to query the functionality for samples. The adversary can issue as many queries as it likes. At a certain point, it can choose one of the samples it received, the one that it likes the most, and it can instruct the functionality to output that value to all the honest players. Now, I would like to explain why we need this particular functionality, and the reason is rushing behavior. Suppose that the adversary corrupts parties P_2 up to P_n. Using rushing behavior, it can obtain the messages of the honest players before it sends the messages of the corrupted parties.
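To make the syntax concrete, here is a minimal sketch of the one-round interface just described, with a deliberately insecure toy instantiation: each message is simply a random share of the coins, and the public output function XORs the transcript and feeds it into D (a hash stands in for a real distribution such as RSA modulus generation). All names here are illustrative, not from the paper.

```python
import hashlib
import secrets

def D(coins: bytes) -> bytes:
    """The distribution, modelled as an algorithm on random coins
    (here just a hash, standing in for e.g. RSA modulus generation)."""
    return hashlib.sha256(b"D" + coins).digest()

def gen() -> bytes:
    """P_i's single message U_i (toy and insecure: the coin share is in the clear)."""
    return secrets.token_bytes(16)

def sample(transcript: list[bytes]) -> bytes:
    """Deterministic public function of the transcript: every party,
    and indeed any eavesdropper, recomputes the same output."""
    coins = bytes(16)
    for u in transcript:
        coins = bytes(a ^ b for a, b in zip(coins, u))
    return D(coins)
```

The point of the sketch is only the shape of the protocol: one simultaneous message per party, then a deterministic function of the public transcript.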
At that point, it can rerun the protocol in its head many times, regenerating the messages of the corrupted parties only and obtaining many different samples. It can choose the one that it likes the most, and send the corresponding messages of the corrupted players in the real protocol. Now it is sure that the honest parties will output the value that it pre-computed. In the actively secure case, we have exactly this kind of functionality: first, the adversary receives a polynomial number of samples, it chooses the one that it likes the most, and it forces the honest parties to output it. What are our results? In the semi-malicious case, we designed distributed samplers for any distribution D in the plain model, using only polynomially secure primitives, in particular iO and multi-key FHE. In the actively secure case, we designed again distributed samplers for any distribution D, but we had to rely on a random oracle. Again, we just used polynomially secure primitives: iO, multi-key FHE, and NIZKs. Okay, now I would like to sketch our semi-malicious construction. I will start from an attempt that doesn't really work, but it gives a good idea of the main techniques of our construction. We want to generate the sample R from the distribution D, and the first challenge that we encounter is to choose the randomness that we feed into D. This randomness has to be chosen jointly by the parties, so we let each party P_i sample a share of it, and what we input into D is the XOR of R_1, R_2, ..., R_n. Clearly, we cannot compute this operation in the clear, otherwise everybody would know the randomness that produced the output, so instead we perform it in an encrypted way using multi-key FHE. Each party P_i will send a public key along with an encryption of its share, and at this point everybody is able to obtain an encryption of the final output using the homomorphic properties.
The issue is that nobody is able to decrypt it, because the parties don't know the partial decryptions, so they would need an additional round of interaction. Distributed samplers are one-round protocols, so this is not allowed; we have to find another way, so we rely on obfuscation. Each party P_i now sends an obfuscated program called the evaluation program, along with its public key and its ciphertext. This obfuscated program will contain the secret key of the i-th party hardcoded. It will take as input all the ciphertexts, it will evaluate the distribution homomorphically on the inputs, and finally it will perform the partial decryption using the hardcoded secret key. So using the evaluation programs, the parties are able to obtain all the partial decryptions in one round, and they can just run the final decryption algorithm and retrieve the output. So the solution is correct and it works in only one round; the question is whether it is secure, and the answer is no. There are two problems. The first one is associated with an issue that all one-round MPC protocols have, namely that the adversary can rerun the protocol in its head many times, changing the messages of the corrupted players and still receiving an output. So suppose that P_1 is the only corrupted party. The adversary can rerun the protocol in its head, but instead of sending an encryption of R_1, it sends an encryption of R_1 XORed with an error e. It still receives an output because it's a one-round protocol, and this output is a sample R_e from the distribution D. R_e is correlated with the actual output of the protocol: indeed, it uses as randomness the XOR of R_1, R_2, ..., R_n and the error e, which is known to the adversary. This is an issue because it may be possible that all these values permit retrieving the randomness R that was used for the actual output. So we need to fix this.
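The one-round flow of the evaluation programs can be illustrated with additive secret sharing standing in for multi-key FHE, a drastic simplification I am introducing only to show the shape of the computation: a joint "ciphertext" masked by all secret keys, one partial decryption released per program, and a final local combination (real multi-key FHE partial decryptions depend on the ciphertext and include smudging noise).

```python
import secrets

Q = 2**61 - 1  # toy modulus
n = 3

sks = [secrets.randbelow(Q) for _ in range(n)]  # each party's hardcoded secret key
m = 42                                          # the homomorphically-evaluated result
c = (m + sum(sks)) % Q                          # joint "ciphertext" under all n keys

def evaluation_program(i: int, ciphertext: int) -> int:
    """Toy stand-in for P_i's obfuscated evaluation program: it releases
    a partial decryption derived from the hardcoded sk_i (here, trivially,
    the key itself; a real scheme would bind it to the ciphertext)."""
    return sks[i]

# One round: everyone runs all n evaluation programs, then combines locally.
partials = [evaluation_program(i, c) for i in range(n)]
result = (c - sum(partials)) % Q
```

The final decryption needs no further interaction, which is the whole point of shipping the partial-decryption capability inside an obfuscated program.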
The second issue is instead connected to the evaluation program, and in order to understand why, suppose that P_i is the only honest player. The security of the construction requires that R_i is kept secret, but in the protocol we send an encryption of R_i to the other parties. So we have to rely on semantic security, and semantic security holds as long as we leak no information about the secret key. But now the secret key is encoded into the evaluation program, so we need to argue that this program doesn't leak information. How do we do this? Well, in the security proof, we will substitute this program with another one that is indistinguishable, but that doesn't contain SK_i hardcoded. Now, this is tricky because the number of potential inputs that the adversary can evaluate on this program is exponential. So if we want to remove SK_i, we need to change an exponential number of executions of this program, and that requires exponentially many hybrids. That would mean requiring sub-exponentially secure primitives, in particular iO and multi-key FHE. These are strong assumptions; we don't want them. We need to find another solution, so we change our approach. This leads to the second attempt. Now we want to make sure that each party P_i uses a different random string and a different key pair for every choice of the messages of the other parties. Let me explain it better. We want that if any of the other parties changes its message, either in the real execution or in one of the executions in the head of the adversary, party P_i uses an independent-looking random string and an independent-looking key pair. How do we obtain that? Well, we don't send the public keys and the ciphertexts directly to the other parties. We send instead an obfuscated program called the key generation program. The goal of this program is to generate the public key and the ciphertext of party P_i.
Formally, this program has a puncturable PRF key hardcoded, and it takes as input all the key generation programs of the other parties. Here, actually, there is an issue: we cannot quite do this, but for the moment I will ignore it and talk about it later. The first operation that the program performs is to feed the input as a nonce to the PRF, and we obtain two random strings. The first one is R_i, the share of party P_i of the randomness, the one that we feed into the distribution, and the second one is a random string that I call R-hat. R-hat is fed into the multi-key FHE key generation, so from it we extract a key pair, and the final operation that the program performs is to encrypt R_i under the public key it just generated. The output is PK_i and C_i. What do we notice here? We notice that if any of the other parties changes its message, the nonce of the PRF changes, so we get independent-looking randomness and therefore independent-looking key pairs. That's exactly what we want. Since we changed the way we generate the public keys and the ciphertexts, we need to change the evaluation program a little bit too, but the main ideas are the same as before. Now we hardcode the puncturable PRF key, and we give as input all the key generation programs of the other parties. The evaluation program needs to retrieve the secret key for the partial decryption. How do we obtain that? Well, we have the puncturable PRF key, so we just repeat the operations performed in the key generation program. The evaluation program also needs the public keys and the ciphertexts of the other parties. We obtain them by running the key generation programs of the other players that were given as input. All right, now we have a solution that is correct, uses only one round, and is, in principle, secure. There is only one issue, namely that the program that you see on the top, the key generation program, doesn't exist as it is now. The issue is that the inputs are too big. Indeed, obfuscated programs are circuits.
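The data flow of the key generation program can be sketched as follows. This is a toy stand-in only: HMAC-SHA256 plays the role of the puncturable PRF, plain SHA-256 plays the role of the (somewhere statistically binding) hash of the other parties' messages, and the "key pair" and "ciphertext" derivations are placeholders for the multi-key FHE ones. What the sketch does capture is the property we want: the derivation is deterministic, and changing any other party's message re-randomizes everything.

```python
import hashlib
import hmac

def key_generation_program(prf_key: bytes, other_msgs: list[bytes]):
    """Toy sketch of P_i's key generation program."""
    # The other parties' messages are only used as a nonce, so we can hash them.
    digest = hashlib.sha256(b"".join(other_msgs)).digest()
    # PRF on the nonce yields the randomness share R_i and the string R-hat.
    r_i   = hmac.new(prf_key, b"share" + digest, hashlib.sha256).digest()
    r_hat = hmac.new(prf_key, b"keys"  + digest, hashlib.sha256).digest()
    # Placeholder key generation from R-hat, and encryption of R_i under PK_i.
    pk_i = hashlib.sha256(b"pk" + r_hat).digest()
    c_i  = bytes(a ^ b for a, b in
                 zip(r_i, hashlib.sha256(b"enc" + r_hat).digest()))
    return pk_i, c_i
```

Determinism is what lets the evaluation program, which holds the same PRF key, re-derive the secret key for partial decryption without any extra communication.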
We cannot feed them inputs that are bigger than themselves, and this is exactly what happens: we feed n minus 1 key generation programs to each key generation program. That cannot happen. So, what's our solution? We hash the key generation programs of the other parties before giving them to KGP_i. We can do this because the inputs are just used as a nonce. Furthermore, if any of the other parties changes its message, by the collision resistance of the hash function we have different digests, different nonces for the PRF, independent-looking random strings, and so independent-looking key pairs. To make this argument compatible with iO, we use a specific hash function that is called somewhere statistically binding. But after this, we have a secure construction that is correct. Okay, so this was our semi-malicious construction. How do we upgrade it to active security? Well, we designed a compiler called the anti-rusher compiler that takes a one-round protocol that is semi-maliciously secure in the plain model and compiles it into an actively secure protocol that still has one round, in the random oracle model. The trick at the base of this compiler is the delayed backdoor programming technique by Hofheinz et al. at Asiacrypt 2016. Using it, we obtain an actively secure distributed sampler. This is all I wanted to say about distributed samplers. Now, I would like to talk a little bit about public-key PCFs. We are still in the n-party setting and we allow up to n minus one corruptions. Now, we don't deal anymore with a distribution, but with a correlation function. This correlation function outputs n correlated values, one for each party. Our goal is to design protocols that generate and distribute these correlated values in a secure way, so party P_i must learn only its output R_i. We want to generate many samples, using only one round and communication sublinear in the number of samples that we produce.
If we look at the role of the messages in this protocol, we notice that party P_i is the only player that must learn R_i. So it has to leverage the fact that it is the only player that knows the randomness used to produce its message. This randomness acts as a private key, whereas the message itself acts as the public counterpart. That's why we call them public-key PCFs, and we model them by saying that each party P_i sends a public key, keeping the private counterpart secret. How do the parties retrieve their samples? Well, they take a nonce x, which can be public. Each party P_i just runs an evaluation algorithm on all the n public keys, its own secret key, and the nonce x. In this way, it obtains its part of the sample, and the operation can be repeated for every nonce x without any other interaction. Okay, we designed different public-key PCFs. Some of them have semi-malicious security, some others have active security. Sometimes we use polynomially secure primitives, sometimes we use sub-exponentially secure iO. Sometimes we are in the plain model, sometimes we are in the random oracle model, but all the techniques are based on the same idea. We start from a simpler setting in which we allow a CRS. Each party P_i generates a public-key encryption key pair, sending only the encryption key. Our CRS will consist of an obfuscated program. It will contain a puncturable PRF key hardcoded, and it takes as input the encryption keys and a nonce x. We feed the inputs as a nonce to a PRF, and we obtain a random string R, which we feed into the correlation function, obtaining the n correlated values. Next, we encrypt each of these values using the encryption key of the corresponding party, and we output the n ciphertexts. Everybody is able to retrieve the n ciphertexts, but only party P_i is able to decrypt C_i and receive its output. This is a public-key PCF in the CRS model. How do we get rid of the CRS? Well, we use distributed samplers.
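The CRS-model construction just described can be sketched as below, under heavy simplifications that I should flag: HMAC stands in for the puncturable PRF, the parties' "encryption keys" are symmetric keys standing in for real public-key encryption, and the correlation function is the simplest one I could pick, additive shares of a pseudorandom secret. The structure, though, mirrors the talk: the public program derives everything from a PRF applied to the keys and the nonce, and party P_i can only open its own slot.

```python
import hashlib
import hmac
import secrets

N = 3
MOD = 1 << 128

def prf(key: bytes, data: bytes) -> int:
    """HMAC-SHA256 truncated to 128 bits, standing in for a puncturable PRF."""
    return int.from_bytes(hmac.new(key, data, hashlib.sha256).digest()[:16], "big")

K = secrets.token_bytes(32)                         # PRF key hardcoded in the CRS program
eks = [secrets.token_bytes(32) for _ in range(N)]   # party keys (symmetric stand-in for PKE)

def crs_program(keys: list[bytes], x: bytes) -> list[int]:
    """Toy stand-in for the obfuscated CRS program."""
    nonce = b"".join(keys) + x
    # Correlation function: additive shares (mod 2^128) of a pseudorandom secret.
    s = prf(K, b"secret" + nonce)
    shares = [prf(K, b"share%d" % i + nonce) for i in range(N - 1)]
    shares.append((s - sum(shares)) % MOD)
    # Encrypt share i under party i's key.
    return [r ^ prf(eks[i], x) for i, r in enumerate(shares)]

def evaluate(i: int, x: bytes) -> int:
    """Party P_i's evaluation: rerun the public program, decrypt its own slot."""
    cts = crs_program(eks, x)
    return cts[i] ^ prf(eks[i], x)
```

Because everything is derived from the nonce, the parties can repeat the evaluation for fresh nonces x forever with zero further communication, which is where the sublinearity comes from.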
Distributed samplers are one-round protocols that generate any CRS, so we use one for this particular distribution. The parties just need to send the message of the distributed sampler along with their encryption key. That's all I wanted to say. This is a slide with a summary of our results. I thank you for your attention. Okay. Thank you, Damiano. Do we have questions? Any questions for Damiano? Okay, so thank you again, and there is a break waiting for us.