Hello, my name is Thomas Agrikola and I will present the paper "The Usefulness of Sparsifiable Inputs: How to Avoid Subexponential iO". This is joint work with Geoffroy Couteau and Dennis Hofheinz, and the work was done while all authors were at KIT.

Let's first start with what indistinguishability obfuscation is. Indistinguishability obfuscation (iO) is a method to transform programs into unintelligible ones while maintaining their functionality. So it can be seen as a compiler which takes a program and compiles it into a program that computes exactly the same function but is unreadable or unintelligible in a certain sense. Unintelligible here means that if we have two programs P1 and P2 that have the same input-output behavior, so they compute the same function, and we obfuscate them, then their obfuscations are guaranteed to be indistinguishable.

iO is extremely powerful. We can build almost anything we can think of from iO plus some comparatively mild assumptions such as one-way functions. Even applications like deniable encryption, which seemed beyond our reach for a long time, are now possible with iO. However, many of those applications actually involve a subexponential reduction loss relative to iO; those are highlighted in red here. We are still on the lookout for iO candidates which are based on standard assumptions in the standard model, and the security of existing iO candidates is still not so well understood. So it is possible that, in the end, subexponentially secure iO is much harder to get than polynomially secure iO. That's why we are interested in avoiding this subexponential loss relative to iO.

There are several previous works on avoiding subexponential reductions in the context of iO. A popular strategy there is to avoid iO entirely and start directly from functional encryption, which in the end results in an entirely polynomial reduction. This road led to several improvements, for example in the domain of short signatures, universal samplers, non-interactive multi-party key exchange, trapdoor one-way permutations, multi-key functional encryption, and several others. However, the applications supported by these approaches are rather restricted. Our goal here is something more general.

Many applications which currently require subexponential iO use an abstraction called probabilistic iO (pIO), highlighted in yellow. In this talk, we will focus on this quite general subclass of applications. In fact, probabilistic iO can be seen as a generalization of subexponentially secure iO, which makes the task of avoiding subexponential iO particularly challenging. We even believe that some sort of subexponential assumption is inherent for probabilistic iO. So our goal is not to avoid it entirely, but to push this subexponential loss away from iO, which is a novel and rather shaky assumption at the moment, to some well-studied and well-analyzed assumption.

Okay, let's define probabilistic iO. Normal iO takes a deterministic program and compiles it into an unintelligible one with the same input-output behavior. In this context, unintelligible means that if two programs P1 and P2 behave identically, then their obfuscations are guaranteed to be indistinguishable. Probabilistic iO, on the other hand, compiles a randomized program into an unintelligible deterministic one. So clearly, the obfuscated program cannot behave exactly like the original randomized one, simply because there is no randomness anymore. But for this talk, it will be enough to ignore this correctness issue and say that the obfuscated program behaves similarly in a suitable sense. In the context of pIO, unintelligible means that if two programs are functionally indistinguishable, then their obfuscations should be guaranteed to be indistinguishable too.
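Stated slightly more formally (a simplified sketch only; here $\approx_c$ denotes computational indistinguishability, and the pIO condition below is just an informal rendering of the functional-indistinguishability requirement described above, not the precise definition from the paper):

```latex
% iO: identical input-output behavior implies indistinguishable obfuscations
\forall\, P_1, P_2:\quad
  \bigl(\forall x:\ P_1(x) = P_2(x)\bigr)
  \;\Longrightarrow\;
  i\mathcal{O}(P_1) \approx_c i\mathcal{O}(P_2)

% pIO: functional indistinguishability of the randomized programs
% (their output distributions on any input are indistinguishable)
% implies indistinguishable (deterministic) obfuscations
\forall\, P_1, P_2:\quad
  \bigl(\forall x:\ P_1(x; r) \approx_c P_2(x; r)\ \text{over random } r\bigr)
  \;\Longrightarrow\;
  pi\mathcal{O}(P_1) \approx_c pi\mathcal{O}(P_2)
```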
But why does this notion of probabilistic iO require subexponential iO? pIO provides this indistinguishability guarantee even if the obfuscated programs only satisfy this rather weak functional indistinguishability requirement, and this gap captures a vast class of programs. For instance, programs which output an encryption are always functionally indistinguishable, even if what is encrypted varies a lot among these programs, maybe even if their output supports are disjoint because they encrypt different messages.

At TCC 2015, Canetti, Lin, Tessaro, and Vaikuntanathan constructed probabilistic iO. Their strategy is to first make the randomized program deterministic by deriving its random coins via a PRF from its input, and then to use iO to obfuscate this deterministic program. But iO security can still only be applied if the programs behave fully identically. In our example, P1 and P2 behave very differently and might even have disjoint output domains, so a direct reduction to iO just cannot work. Instead, Canetti et al. use a one-input-at-a-time hybrid argument over all possible inputs, including the randomness. Of course, this is an exponential number of computational hybrids in most cases of interest, and that is where they lose the subexponential factor. Our goal is to use a similar approach but reduce this number of hybrids to a polynomial amount.
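To illustrate the derandomize-then-obfuscate idea, here is a minimal sketch. The names `prf`, `derandomize`, and `obfuscate` are hypothetical stand-ins used only for illustration: a real construction uses a puncturable PRF inside the obfuscated circuit, and `obfuscate` would be an actual iO obfuscator rather than the identity.

```python
import hashlib

def prf(key: bytes, x: bytes) -> bytes:
    # Toy PRF stand-in; the real construction uses a puncturable PRF.
    return hashlib.sha256(key + x).digest()

def derandomize(randomized_program, prf_key: bytes):
    # Turn a randomized program P(x; r) into a deterministic one by
    # deriving its random coins from the input x via the PRF.
    def deterministic_program(x: bytes) -> bytes:
        r = prf(prf_key, x)
        return randomized_program(x, r)
    return deterministic_program

def obfuscate(program):
    # Placeholder for an iO obfuscator; here it simply returns the program.
    return program

# Example: a randomized program whose output depends on its coins r.
def sample_program(x: bytes, r: bytes) -> bytes:
    return hashlib.sha256(r + x).digest()

obfuscated_program = obfuscate(derandomize(sample_program, b"prf-key"))
print(obfuscated_program(b"some input").hex())
```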
Our main tool to achieve this goal is extremely lossy functions (ELFs), as introduced by Zhandry at Crypto 2016. ELFs are functions that offer two indistinguishable modes: an injective mode, where the pre-image and image sizes are both exponential, and an extremely lossy mode, where the image only has polynomial size. This is an extremely strong requirement, but ELFs can indeed be built based on the exponential DDH assumption, which, compared to iO, is a very well studied assumption. This assumption is actually rather popular in practice; in particular, current key length recommendations for certain elliptic curve groups assume exponential DDH.

By using ELFs, of course, we cannot end up with an entirely polynomial reduction. But again, we believe that some sort of subexponential assumption is indeed inherent for probabilistic iO. So we can still try to push the subexponentiality away from iO to the much more well studied DDH assumption, which is already a great accomplishment if we can do this.

Okay, how could we use ELFs to reduce the number of hybrids in the Canetti et al. construction? A first try could be to apply the ELF directly on the input x, so that the number of hybrids becomes polynomial when the ELF is in lossy mode. Why doesn't this work? The problem is that if we pre-process the inputs to the program with some arbitrary hard-to-invert function and then evaluate the program on the result, this does not preserve the expected functionality of the program. So this approach cannot work.

Instead, we follow a very different approach. We observe that many pIO applications share a common ground in how they use pIO-obfuscated programs: in many applications, the obfuscated programs are only fed inputs x which are sampled according to some application-dependent distribution, let's call it D(m). D(m) could, for example, be the distribution of ciphertexts of some given message m, or it could just be the distribution of public keys of an encryption scheme. In our approach, we leverage this fact as follows. We apply the extremely lossy function to the random tape of this input distribution. As a result, the inputs can be sparsified without losing their structure. So evaluation on those sparsified inputs still has the expected result, because they are just fewer inputs, but with the same meaning.

We abstract this property and provide a framework which we call doubly probabilistic iO (dpIO), and this framework captures these cases. Okay, what's the difference to normal pIO? In contrast to pIO, we are now in the CRS model, and we need to compile the input sampler such that it additionally outputs some auxiliary information, which we will use to verify that the produced input x is actually valid in some sense, where valid means that it is sparsifiable with the ELF. Also, the dpIO obfuscation of some program P additionally needs to depend on the sampler D, in order to make it possible to verify that x was computed by running the sampler on ELF-pre-processed random coins. If this is not the case, we restrict the correctness of the obfuscation and it just does not do anything meaningful. This diagram describes a one-source dpIO scheme, so for one-input programs, but we can of course extend this to two-source or L-source schemes with multiple input distributions, as long as the number L is polynomial.

Okay, how do we instantiate this with polynomial iO and extremely lossy functions? Given some input sampler D, the compiled input sampler looks as follows: instead of sampling x_i with D on uniform random coins, we pre-process these random coins with the extremely lossy function. Furthermore, the compiled input sampler produces a NIZK proof as auxiliary information, which proves that x_i was computed with these ELF-pre-processed random coins. We do not need to obfuscate this compiled sampler; it is just given in the clear, since there is nothing inside that needs to be protected. The dpIO obfuscation of some program P with respect to the input sampler D is now an iO obfuscation of the program which first verifies the NIZK proof in the auxiliary information and, if verification succeeds, derives random coins deterministically via a PRF applied to x_i (not to the auxiliary information, just to x_i); this is basically how Canetti et al.'s construction worked. If verification fails, we output an error symbol and restrict correctness in this case. If we now switch the ELF to being lossy, the number of valid inputs x_i, that is, the inputs where the obfuscation does not abort, decreases to a polynomial amount, assuming additionally that the number of possible m_i is also small. Then the one-input-at-a-time hybrid argument due to Canetti et al. only needs to be done for this polynomial number of inputs, which yields a polynomial reduction relative to iO.
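Here is a minimal end-to-end sketch of this instantiation. Everything in it is a toy stand-in assumed only for illustration: the `Elf` class is a hash-based mock of an extremely lossy function rather than the DDH-based construction, the NIZK proof is replaced by simply handing out the pre-processed coins and re-running the sampler on them, and the final iO step is omitted, so the "obfuscation" is returned in the clear.

```python
import hashlib

class Elf:
    """Toy mock of an extremely lossy function (in the spirit of Zhandry's ELFs).

    In injective mode the coins pass through unchanged; in lossy mode they are
    squashed into a polynomial-size image. The real ELF is built from
    exponential DDH; this mock only illustrates the interface.
    """
    def __init__(self, lossy: bool):
        self.lossy = lossy

    def apply(self, coins: bytes) -> bytes:
        if not self.lossy:
            return coins
        # Lossy mode: truncate a hash to one byte, so at most 256 outputs.
        return hashlib.sha256(coins).digest()[:1]

def prf(key: bytes, x: bytes) -> bytes:
    return hashlib.sha256(key + x).digest()

def compiled_sampler(sample, elf: Elf, coins: bytes, m: bytes):
    # Pre-process the sampler's random tape with the ELF, then sample x.
    # As auxiliary information we simply hand out the pre-processed coins;
    # the real instantiation outputs a NIZK proof of consistency instead.
    r = elf.apply(coins)
    x = sample(m, r)
    return x, (m, r)

def dpio_obfuscate(program, sample, prf_key: bytes):
    # dpIO obfuscation of `program` with respect to the sampler `sample`:
    # check that x really comes from ELF-pre-processed coins, then run the
    # program on coins derived deterministically from x via the PRF.
    def obfuscated(x: bytes, aux) -> bytes:
        m, r = aux
        if sample(m, r) != x:      # stand-in for NIZK verification
            return b"error"        # correctness is not guaranteed here
        return program(x, prf(prf_key, x))
    return obfuscated              # a real scheme would apply iO here

# Usage: with the ELF in lossy mode, the set of valid inputs x becomes small.
def sample(m: bytes, r: bytes) -> bytes:
    return hashlib.sha256(m + r).digest()

def program(x: bytes, coins: bytes) -> bytes:
    return hashlib.sha256(coins + x).digest()

elf = Elf(lossy=True)
x, aux = compiled_sampler(sample, elf, b"fresh random coins", b"0")
obf = dpio_obfuscate(program, sample, b"prf-key")
print(obf(x, aux).hex())
```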
Okay, how can we use this new framework? We build leveled homomorphic encryption (LHE). This is a public-key encryption scheme which supports homomorphic evaluation of circuits of a fixed depth, so it is basically like fully homomorphic encryption, just restricted to fixed-depth circuits. It additionally provides an evaluation algorithm such that, if we have two messages m1 and m2, we are guaranteed to obtain the same result whether we first encrypt them, feed the ciphertexts into the evaluation algorithm, and then decrypt the result, or we just directly evaluate the circuit C on the m_i.

Canetti et al. construct LHE from some PKE scheme, with the evaluation algorithm built using pIO. For simplicity, in this talk we only consider a single NAND gate, which is enough to understand the construction. Canetti et al. construct the evaluation algorithm as a pIO obfuscation of the circuit which first decrypts its two inputs, then evaluates the logical NAND on the results, and finally re-encrypts this result using the key of the next level of the leveled encryption scheme. This construction indeed meets the requirements for our framework: the inputs to the circuit are sampled from some distribution, in this case the distribution induced by encryption, and the inputs m_i to those distributions come from a small domain, because in this case it suffices to consider bit encryption schemes.

We now adapt this construction using our dpIO framework and highlight the key differences. What do we need to change? We need to replace the input distributions, that is, the ciphertext distributions, with the compiled input samplers, to make sure that these inputs are sparsifiable with the ELF. Additionally, the obfuscated circuit does not just encrypt the result directly, but also uses the compiled input sampler, just for the next level. Like this, we make sure that our sparsification procedure also works across multiple levels. As a result, we obtain leveled homomorphic encryption from polynomial iO plus extremely lossy functions. This is still an exponential assumption, but the subexponential loss is not relative to iO, a shaky assumption, but relative to DDH, a well-studied assumption.

Okay, to summarize: iO is extremely powerful and yields a vast amount of applications. So far, many of these applications involve a subexponential loss relative to iO, particularly the ones relying on probabilistic iO. Our dpIO framework allows us to push the subexponential reduction loss away from iO, which is a novel and rather shaky assumption, to the much more well-studied exponential DDH assumption. Our dpIO framework can be applied to improve several security reductions; for instance, we avoid the need for subexponentially secure iO in the context of leveled homomorphic encryption, fully homomorphic encryption, spooky encryption, and homomorphic secret sharing. There is also subsequent work by Döttling and Nishimaki, who use our framework to avoid subexponential iO for universal proxy re-encryption.

Okay, thank you for your attention.
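As a small addendum to the transcript, here is a toy sketch of the NAND-evaluation circuit described above. The encryption scheme is a hypothetical keyed toy cipher standing in for the public-key bit encryption scheme, and the circuit is shown in the clear although it would of course be obfuscated with pIO (or, in our construction, dpIO).

```python
import hashlib, os

def enc(key: bytes, bit: int, coins: bytes) -> bytes:
    # Toy randomized bit encryption: the ciphertext is the coins together
    # with the bit masked by a pseudorandom pad derived from key and coins.
    pad = hashlib.sha256(key + coins).digest()[0] & 1
    return coins + bytes([bit ^ pad])

def dec(key: bytes, ct: bytes) -> int:
    coins, masked = ct[:-1], ct[-1]
    pad = hashlib.sha256(key + coins).digest()[0] & 1
    return masked ^ pad

def make_eval_nand(key_level_i: bytes, key_level_next: bytes):
    # The circuit that Canetti et al. obfuscate with pIO (and that we would
    # obfuscate with dpIO): decrypt both input ciphertexts under the current
    # level's key, compute NAND, and re-encrypt under the next level's key.
    def eval_nand(ct1: bytes, ct2: bytes, coins: bytes) -> bytes:
        bit = 1 - (dec(key_level_i, ct1) & dec(key_level_i, ct2))
        return enc(key_level_next, bit, coins)
    return eval_nand

# Usage: homomorphically NAND two encrypted bits across one level.
key0, key1 = b"level-0-key", b"level-1-key"
ct1 = enc(key0, 1, os.urandom(16))
ct2 = enc(key0, 1, os.urandom(16))
evaluate = make_eval_nand(key0, key1)
ct_out = evaluate(ct1, ct2, os.urandom(16))
assert dec(key1, ct_out) == 0   # NAND(1, 1) == 0
```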