Let me first recall lossy trapdoor functions. Such a family has two modes of generating functions: an injective mode, where the sampled function is injective and comes with a trapdoor that permits inversion, and a lossy mode, where the function statistically loses information about its input, so that on a domain of n-bit strings the size of the range is at most 2^(n − L) when the family loses L bits. The computational requirement we have for this family of functions is that, given the description, no efficient observer can tell whether it describes a lossy function or an injective function. Recall also the security requirement for encryption: the interaction shouldn't give the adversary much information, in the sense that if the adversary is efficient, the probability that it guesses the challenge bit correctly is not far from one half, only negligibly close. Rosen and Segev considered computational analogues of this notion, namely families of functions that remain one-way even on correlated inputs. And, again, when instantiating these functions from lossy trapdoor functions, the required amount of lossiness is rather high.
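For reference, the two modes just described can be written as follows (a standard formulation of an (n, L)-lossy trapdoor function family; the sampler names S_inj and S_loss are my labels, not the speaker's):

```latex
% Two sampling modes of an (n, L)-lossy trapdoor function family
% (S_inj, S_loss are my labels for the two generation algorithms):
(s, t) \leftarrow S_{\mathrm{inj}} : \quad f_s \colon \{0,1\}^n \to \{0,1\}^{*}
    \text{ is injective, and } t \text{ is a trapdoor for inverting } f_s,
\\
s \leftarrow S_{\mathrm{loss}} : \quad \bigl|\,\mathrm{Im}(f_s)\,\bigr| \le 2^{\,n-L},
\\
\{\, s : (s, t) \leftarrow S_{\mathrm{inj}} \,\} \ \approx_c \ \{\, s : s \leftarrow S_{\mathrm{loss}} \,\}.
```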
So, in this work, we build on the Rosen-Segev framework. Specifically, we consider the same construction they propose in their paper, but we show that you can instantiate these notions, so there are instantiations and input distributions that are one-way, correlated input distributions, starting from lossy trapdoor functions with a very small amount of lossiness; in particular, not even a bit, a polynomial fraction of a bit is enough. As an outline of the talk, I'll first give some definitions of what it means to be one-way under correlated inputs, and briefly sketch the Rosen-Segev construction. Then I'll go to the main part of the talk, which is how to achieve CCA security starting from slightly lossy trapdoor functions. And I'll also give a construction, based on the modular squaring function, of such a lossy trapdoor function that loses only a small amount of bits.

So, let's start with a family of efficiently computable functions; its w-wise product is defined as follows. The generation algorithm simply samples w descriptions of functions, by invoking the generation algorithm of the family w times, independently, with independent randomness. And on input a vector of w components, the evaluation algorithm simply applies the i-th function to the corresponding entry. The one-wayness requirement is that no efficient adversary, given the descriptions of the functions and the corresponding outputs, can recover the initial input. Let's say this is the definition for injective functions, where the probability here is taken over inputs sampled according to this distribution C_w, and also over the randomness used in generating the descriptions and the randomness of the adversary.

Building on this notion, they presented a construction of a CCA-secure encryption scheme. The basic components for this scheme are an injective trapdoor function which is one-way under C_w-correlated inputs (I'll elaborate more on what this distribution C_w is), a strongly unforgeable one-time signature scheme, and a predicate which is hard-core with respect to this family of functions and with respect to these distributions. And the construction is rather simple. They first invoke the generation algorithm to get w pairs of functions along with the corresponding trapdoors. In the encryption algorithm, every time we want to encrypt, we generate a verification-key and secret-key pair using the generation algorithm of the signature scheme, and then the bits of the verification key are used as selectors of which functions to use in the encryption. You can think of the input vector as the randomness used in the encryption. For the rest of the scheme, the bit is masked with the hard-core predicate, and then at the end, everything is bundled together with a signature that serves mostly as a proof that the ciphertext was correctly formed (see the sketch below).

I won't get into the details of the proof. What is important for the proof is that there are two requirements on this input distribution. One requirement is the main hardness assumption: we need whatever family of functions we consider to be one-way under whatever distribution we consider. And the second requirement has to do with the CCA proof and the ability of the simulator to almost perfectly simulate decryption, being able to reply to decryption queries. And this requires that the entire input vector is reconstructable given only a single component.
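Here is a rough sketch of the w-wise product and the encryption step just described (illustrative pseudocode under my own naming; gen_pair, sig_keygen, sign, sample_input, evaluate, and hardcore are stand-ins for the components named above, not APIs from the paper, and trapdoors are omitted for brevity):

```python
def product_gen(gen_pair, w):
    """Sample w pairs of function descriptions, one pair per
    verification-key bit, by invoking the generator independently."""
    return [gen_pair() for _ in range(w)]   # [(f_{i,0}, f_{i,1}), ...]

def encrypt(pairs, bit, sig_keygen, sign, sample_input, evaluate, hardcore):
    """Rosen-Segev-style encryption of a single bit, as sketched in the talk."""
    vk, sk = sig_keygen()        # one-time signature key pair
    xs = sample_input()          # correlated inputs x_1..x_w drawn from C_w
    w = len(pairs)
    vk_bits = [(vk >> i) & 1 for i in range(w)]   # treat vk as a w-bit integer
    # the i-th bit of vk selects which function of the i-th pair to apply
    ys = [evaluate(pairs[i][vk_bits[i]], xs[i]) for i in range(w)]
    masked = bit ^ hardcore(xs)  # mask the plaintext bit with the hard-core bit
    sigma = sign(sk, (ys, masked))   # signature binds the ciphertext together
    return (vk, ys, masked, sigma)
```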
And naturally, this led Rosen and Segev to consider what they call the w-repetition distribution, which simply picks an n-bit string x1, and then x2 through xw are all set equal to x1. This already satisfies the second requirement. Unfortunately, in order to prove that there exist families of functions that are secure with respect to this w-repetition distribution, they needed to start from lossy trapdoor functions that are highly lossy, in the sense that they had to lose almost all the information about the input.

Interestingly, in the full version of their paper, they give a generalized construction. In this construction, the only additional component is an error-correcting code over an alphabet Sigma, where k is the length of the words being encoded and w is the length of the codewords; this code also has distance d. The construction is almost identical to the previous one, except that here, instead of pairs, we have w tuples of |Sigma| functions each. Again, we also sample the corresponding trapdoors. And again, the encryption is almost the same, except that now we first encode the verification key using the error-correcting code, and we use the output symbols as selectors for which functions to use. The rest of the encryption scheme is identical to before.

Now, for this new construction, let's see what the requirements are. Again, for the CCA security proof to go through, the hardness assumption is the same, so we have the same requirement on the input distribution. But now the second requirement is somewhat relaxed, in the sense that in order for the simulator to reply to the decryption queries, it should be the case that the entire input vector, x1 through xw, is reconstructable from any d distinct xi's, as opposed to one, which was the case in the simplified construction. Again, recall that this d here is the distance of the error-correcting code used.

So, the starting point of this work is: considering this new input distribution, if we want to start from a lossy function, how much lossiness should we require from it in order to achieve one-wayness under this distribution? To address this problem, we start with a very simple observation that turns out to be crucial for this construction. Let's say we start with a lossy family that loses L bits and has n-bit strings as its domain. And suppose we have an input distribution C_w that has sufficient min-entropy; let's say the min-entropy is mu, where mu is larger than w(n − L) by a super-logarithmic amount. Now consider the two ways we can sample functions from this family: a vector of w functions sampled according to the lossy mode, and w functions sampled according to the injective mode. Apply the functions one by one to the input components. Because of the lossiness property, the output vector in the lossy case takes at most 2^(w(n − L)) possible values, whereas because of injectivity, the output vector in the injective case takes 2^(nw) possible values. This in particular means that, if the input distribution has min-entropy mu, then even given the entire output vector there is still sufficient remaining entropy; or stated in a different way, there are super-polynomially many possible preimages given these lossy functions, whereas if the functions were sampled in the injective mode, there is only a unique preimage.
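In symbols, the counting just described looks as follows (my notation; the conditional-entropy line is the one implicit step made explicit):

```latex
\text{lossy mode: } \bigl|\,\mathrm{Im}(f_{s_1}) \times \cdots \times \mathrm{Im}(f_{s_w})\,\bigr| \le 2^{\,w(n-L)},
\qquad
\text{injective mode: } 2^{\,nw} \text{ possible output vectors,}
\\
\text{so } H_{\infty}\bigl(x_1,\dots,x_w \mid y_1,\dots,y_w\bigr) \ \ge \ \mu - w(n-L),
\\
\text{and } \mu - w(n-L) = \omega(\log n) \ \Rightarrow \ \text{super-polynomially many consistent preimages.}
```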
Now we also use the fact that w descriptions sampled according to the lossy mode are computationally indistinguishable from w descriptions sampled according to the injective mode. Because of the super-polynomially many preimages in one case and the unique preimage in the other, it's not very hard to see that any inverter that could recover the entire input vector would serve as a distinguisher between the two modes, which is impossible by the assumption on the lossy trapdoor function. Stated otherwise: if we start from a family of lossy trapdoor functions that lose L bits, then the w-wise product formed from the injective family of the lossy collection turns out to be one-way under any distribution, as long as the entropy of the distribution is high enough.

Another observation here is that the lossiness and the entropy are correlated in a negative way. Since our goal is to reduce the required lossiness, equivalently we want to find distributions that increase the entropy of the input. And this shows the way to go: we need two properties. The first property is the reconstructability property: given any d components of the input vector, we should be able to reconstruct the entire input vector. Again, this is for the simulation in the CCA security proof. Fortunately, we know ways to sample such distributions: one can think of d-out-of-w threshold secret sharing schemes, where we can sample vectors that have exactly this reconstructability property (a sketch of such a sampler appears below). One other important point is that, in some sense, these distributions are what we call d-wise independent distributions, so the overall entropy of such a distribution is d times n, if each component of the input is an n-bit string. Now the goal is, for a fixed w, to increase this d as much as possible.

This takes us to the second idea. Recall that this d was exactly the distance of the error-correcting code. In the encryption scheme, starting from two verification keys, vk1 and vk2, and considering their encodings under the error-correcting code, the basic property we desire is that for any two different verification keys, the corresponding encodings are as far apart as possible. This translates to the requirement of having an error-correcting code with high distance, and any code that belongs to the family of MDS codes, maximum distance separable codes, is enough. In our instantiation, we are using Reed-Solomon codes, which meet the Singleton bound, so the distance d can be as large as w − k + 1, where w again is the length of the codewords and k is the length of the words being encoded.

So, let me try to put everything together: how to get CCA security from (n, 1)-lossy trapdoor functions. Again, we have this error-correcting code, Reed-Solomon with this high distance, and again the input distribution with entropy d times n and the reconstructability property. And let's say k equals n^epsilon for some constant epsilon, whereas w, the width of the product, is n^theta for some theta larger than 1 + epsilon.
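Before doing the math, here is a sketch of the reconstructable, d-wise independent input distribution just described (a Shamir-style sampler over a prime field; the specific field and names are my choices for illustration):

```python
# d-out-of-w reconstructable input distribution via a random polynomial
# (Shamir-style): x_i = p(i) for a random polynomial p of degree < d.
import random

P = (1 << 127) - 1  # a Mersenne prime, standing in for a field of size ~2^n

def sample_input_vector(d, w):
    """Any d components determine all w; fewer than d are independent."""
    coeffs = [random.randrange(P) for _ in range(d)]
    return [sum(c * pow(i, j, P) for j, c in enumerate(coeffs)) % P
            for i in range(1, w + 1)]

def reconstruct(points, w):
    """Given d pairs (i, x_i), recover the whole vector by Lagrange
    interpolation at the remaining evaluation points 1..w."""
    def interp_at(t):
        total = 0
        for i, xi in points:
            num, den = 1, 1
            for j, _ in points:
                if j != i:
                    num = num * (t - j) % P
                    den = den * (i - j) % P
            total = (total + xi * num * pow(den, -1, P)) % P
        return total
    return [interp_at(t) for t in range(1, w + 1)]
```

Since the polynomial has degree less than d, any d evaluations pin it down completely, while any fewer leak nothing beyond their own values, so the vector has overall min-entropy about d times n when the field has size about 2^n.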
So, if you do the math, the overall entropy of the input distribution turns out to be strictly larger than w(n − 1), and going back to our lemma, if you plug in 1 here instead of L, we get that there exist input distributions with high enough entropy, and so the corresponding w-wise product family, built from (n, 1)-lossy trapdoor functions, turns out to be one-way under such distributions. This in particular means that, by instantiating the construction of Rosen and Segev with such lossy trapdoor functions, we get that (n, 1)-lossy trapdoor functions imply CCA security in a black-box way. And by choosing the parameters here more aggressively, or by doing some kind of lossiness amplification, you can drive this down to anything which is an inverse polynomial fraction of a bit.

As a quick overview: the two parameters that govern the required lossiness are this distance d and w, the number of components in the product. In the initial distribution d was one, which means that we can recover the whole input vector from just a single component, and this gave a highly correlated distribution, a distribution with very low min-entropy, and this was the reason the required lossiness was rather high. We saw how the Rosen-Segev construction, if instantiated correctly with Reed-Solomon codes and high min-entropy distributions, can take us down to this very small amount of required lossiness.

So let's see how the picture has changed. Before, this was the threshold of lossiness required to get CCA security in a black-box way from lossy trapdoor functions, and the only construction we knew that achieved that much lossiness was the DDH-based one, already in the Peikert-Waters paper. Now, by reducing the required lossiness to this level, LWE already has enough lossiness to instantiate the generic construction, and also the QR-based modular squaring function from the previous talk, and also the RSA function from a forthcoming paper based on the Phi-hiding assumption.

So let me now finish the technical part of the talk by giving a simple construction of a slightly lossy trapdoor function. Let me start with the hardness assumption first. On the one hand, we sample a modulus N which is a product of two primes; on the other hand, we have N' being a product of three primes, and they have the same bit length. The hardness assumption is that these two ways of sampling give moduli that are indistinguishable. Given this hardness assumption, there is a quite natural way to construct a family of functions: the injective generation algorithm samples an (n+1)-bit modulus N which is a product of two primes, and the trapdoor is the factorization of this modulus. The lossy generation algorithm samples a modulus which is a product of three primes. And the evaluation algorithm is quite similar to the construction you saw in the previous talk: we again use the squaring function, and then we add two extra bits in order to make the function injective.
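As a rough illustration of the evaluation step, here is a minimal sketch, assuming the two disambiguating bits are a "which half" bit and the Jacobi symbol, as in the previous talk's construction; treat that choice of bits as my assumption rather than the paper's exact formulation:

```python
def jacobi(a, n):
    """Jacobi symbol (a/n) for odd n > 0, by the standard binary algorithm."""
    a %= n
    result = 1
    while a != 0:
        while a % 2 == 0:
            a //= 2
            if n % 8 in (3, 5):
                result = -result
        a, n = n, a
        if a % 4 == 3 and n % 4 == 3:
            result = -result
        a %= n
    return result if n == 1 else 0

def evaluate(N, x):
    """f_N(x) = (x^2 mod N, [x > N/2], Jacobi(x, N)).
    If N is a product of two primes (injective mode), a square has four
    roots among the units, and the two extra bits single one out.
    If N is a product of three primes (lossy mode), squaring is 8-to-1
    on the units and two bits can no longer disambiguate, so the
    function loses information."""
    return (pow(x, 2, N), int(x > N // 2), jacobi(x, N))
```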
And in the paper we prove that, under this assumption, the family is an (n, 1/4)-lossy trapdoor function family. The proof can be sketched briefly as follows. The indistinguishability of the lossy and the injective modes is immediate from the assumption itself. Then, for invertibility, once we have the factorization, we can invert the first component and get at most four preimages; then by using the second bit we can filter out two of them, and then using the Jacobi symbol we can be left with just a single candidate preimage, which is exactly the preimage of the function. Lastly, for the lossiness, one way to go is to partition the input. We consider functions that have n-bit strings as their domain, and we start with the inputs coprime to N. Because N is a product of three primes, squaring is an 8-to-1 mapping on these inputs, so already the image of this set is quite small, bounded by this quantity. For the remaining sets of the partition, these are upper bounds on the sizes of the sets themselves, not necessarily of their images. So if you add everything up, the image of the whole set of n-bit strings is upper bounded in size by 2^(n − 1/4), which gives us the 1/4 lossiness.

And let me just finish with some conclusions. In summary, we do believe that even slightly lossy trapdoor functions are a powerful primitive. In particular, we saw how to use lossy trapdoor functions in a black-box way and get CCA-secure public-key encryption starting from lossy trapdoor functions that have a very small amount of lossiness, and we also saw a way to construct a slightly lossy trapdoor function using a two-versus-three primes assumption. As possible directions for future research, one main question is what slight lossiness really buys: in particular, is it possible to construct slightly lossy trapdoor functions, and therefore CCA security, from hardness assumptions that we don't know how to get CCA security from? And the other question is: we constructed CCA security in a black-box way; is it possible to construct other primitives, again starting from slightly lossy trapdoor functions? Thank you.