This session is about pseudorandom functions. The first talk is on constrained pseudorandom functions for unconstrained inputs, and Venkata is going to give it. Thank you. So I'm Venkata, and I'll be talking about constrained pseudorandom functions for unconstrained inputs. This is joint work with Apoorvaa Deshpande and Brent Waters.

We all know what pseudorandom functions are. A pseudorandom function is a keyed function f with a key space K, and the function evaluations look like uniformly random strings. PRFs have numerous applications in cryptography and are one of its central building blocks. In 2013, three groups, Boneh et al., Boyle et al., and Kiayias et al., proposed a powerful extension called constrained PRFs. In a constrained PRF, we have a keyed function f and a key space, and we also have a constrain algorithm. The constrain algorithm takes as input a PRF key k and a constraint t, and it outputs a constrained key k_t. As the name suggests, this key can be used to evaluate the PRF at all points that satisfy the constraint. More formally, for all inputs x such that x satisfies the constraint t, we have f(k, x) = f(k_t, x). Intuitively, for security we require that an adversary holding a key for some constraint t should not be able to evaluate the PRF at points outside the constraint.

A natural question to ask is: how do you express these constraints? We have constructions for different constraint families. The most basic family is puncturable PRFs. Here, keys correspond to input points, and a key punctured at a point can be used to evaluate the PRF at all inputs except that punctured point. It turns out that the GGM84 PRF construction already gives us puncturable PRFs, so we can get these from one-way functions.
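To make the puncturing idea concrete, here is a minimal sketch of the GGM tree PRF with puncturing. HMAC-SHA256 stands in for the length-doubling PRG, and all function names are my own illustration, not the paper's construction: the punctured key consists of the sibling seeds along the path to the punctured point.

```python
import hmac
import hashlib

def prg_half(seed: bytes, bit: int) -> bytes:
    # One half of a length-doubling PRG, modeled here with HMAC-SHA256.
    return hmac.new(seed, bytes([bit]), hashlib.sha256).digest()

def ggm_eval(key: bytes, x: str) -> bytes:
    """GGM PRF: walk down the binary tree following the bits of x."""
    s = key
    for b in x:
        s = prg_half(s, int(b))
    return s

def puncture(key: bytes, x_star: str):
    """Punctured key: the sibling seed at every level of the path to x_star."""
    pk, s = [], key
    for i, b in enumerate(x_star):
        flipped = 1 - int(b)
        pk.append((x_star[:i] + str(flipped), prg_half(s, flipped)))
        s = prg_half(s, int(b))
    return pk

def punctured_eval(pk, x: str) -> bytes:
    """Evaluate at any x != x_star by resuming from the matching sibling seed."""
    for prefix, seed in pk:
        if x.startswith(prefix):
            s = seed
            for b in x[len(prefix):]:
                s = prg_half(s, int(b))
            return s
    raise ValueError("x equals the punctured point")
```

On every input other than the punctured point, exactly one stored sibling prefix matches, and continuing the tree walk from that seed reproduces the real PRF value.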
Puncturable PRFs have been very useful in the context of indistinguishability obfuscation because of the punctured programming approach introduced by Sahai and Waters. A more expressive functionality is the bit-fixing functionality, where you have a key for a string s. This string fixes some bits and leaves the other bits free, and the key can be used to evaluate the PRF at all inputs that match the fixed bits. Boneh and Waters gave a multilinear-maps-based construction, and this has applications to optimal broadcast encryption. In fact, in the same work, Boneh and Waters showed that you can construct the most general family of constrained PRFs, circuit-constrained PRFs, where a key corresponds to a circuit C and can evaluate the PRF at all inputs accepted by the circuit. Boneh and Waters gave a multilinear-maps-based construction, and later Boneh and Zhandry gave an iO-based construction. This has direct applications to identity-based non-interactive key exchange, and if you combine it with iO, you also get traitor tracing.

Given this, it might seem that we have everything we could ask for, but circuits have one major restriction: they can handle only bounded-length inputs. This means, for example, that the identity-based non-interactive key exchange scheme can only support an a priori bounded number of users. A natural question, then, is: can we construct something for an unbounded number of users? That would require constraints that work for unbounded-length inputs. This question was first studied by Abusalah, Fuchsbauer, and Pietrzak in 2014, who gave a constrained PRF construction where the constraint can be expressed as a Turing machine.
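To make the bit-fixing predicate concrete, here is a tiny illustrative check; the convention of writing free positions as '?' is my own shorthand, not notation from the talk.

```python
def bitfix_match(s: str, x: str) -> bool:
    """Bit-fixing constraint: s is over {0, 1, ?}, where '?' marks a free
    position; the input x satisfies the constraint iff every fixed position
    of s agrees with the corresponding bit of x."""
    return len(s) == len(x) and all(c == "?" or c == b for c, b in zip(s, x))
```

A key for the string s would then evaluate the PRF exactly on the inputs x with `bitfix_match(s, x)` true.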
A Turing machine is probably the most general constraint you can have, and this gives us identity-based non-interactive key exchange for unbounded users and also broadcast encryption for unbounded users. The one limitation, however, is that their construction is based on knowledge-type assumptions. Let me say a bit more about knowledge-type assumptions, and for that I'll have to digress a bit and talk about code obfuscation.

The goal of code obfuscation is to make programs maximally unintelligible. You have a program P, and you obfuscate it to construct another program P' that is functionally identical to P; the hope is that P' hides the secrets inside P. How do you formalize that? There are various security notions. The strongest is virtual black-box obfuscation, which essentially says that giving an adversary the obfuscated code is the same as giving the adversary oracle access to the code: the adversary only learns the input/output behavior and learns nothing about the implementation. Unfortunately, this is too good to be true, and we have impossibility results, as shown by Barak et al. A weaker notion is differing-inputs obfuscation, where the security guarantee is as follows: if you have two programs P1 and P2 whose obfuscations are distinguishable, then there exists an extractor that can extract a differing input. In other words, if P1 and P2 are such that it's hard to find a differing input, then their obfuscations are computationally indistinguishable. Even for this notion we have certain implausibility results, as shown by Boyle et al., Garg et al., and, in a recent work, Bellare et al.; we'll hear more about this on Thursday. A weaker notion that evades all of these implausibility results is public-coin differing-inputs obfuscation.
I won't define this notion here, but it also involves the same extractability-style assumptions. Finally, the weakest of these assumptions is indistinguishability obfuscation (iO), where the guarantee is that if two programs P1 and P2 are functionally identical, then their obfuscations are computationally indistinguishable. So you have this broad spectrum of security assumptions: the green end is believed to be safe, and as you move more and more toward the red end, the assumptions become riskier.

Going back to the construction of Abusalah et al.: it is based on public-coin differing-inputs obfuscation, which is an extraction-based security notion. A natural question, then, is whether we can construct constrained PRFs for Turing machines directly from iO. That's the central question of our work: can we build constrained PRFs for Turing machines based on indistinguishability obfuscation?

Now, it might be tempting to use iO for Turing machines. We know how to get circuit-constrained PRFs from iO for circuits, as shown by Boneh and Zhandry, and we also know how to construct iO for Turing machines from iO for circuits, by the KLW construction. So we might think that we can get Turing-machine constrained PRFs directly from iO for circuits. Unfortunately, this does not work out, because Turing-machine iO only handles bounded-length inputs. We cannot use that result directly, but what we can do is use some of the tools involved in the KLW construction to get Turing-machine constrained PRFs.

That brings us to our results. Assuming iO for circuits and one-way functions, we show a constrained PRF scheme for Turing machines. As it turns out, a similar construction also gives us attribute-based encryption for Turing machines, so we get that almost for free. In this talk, I'll focus only on the first result.
Assuming iO and one-way functions, we get constrained PRFs for Turing machines. Let me now define security for constrained PRFs. We'll look at the selective security game, played between a challenger and an adversary. The challenger first chooses a PRF key k. The adversary then sends its challenge input x*, and receives either the PRF evaluation or a truly random string y*. Then there is a key-query phase, where the adversary is allowed to send Turing machines M_i, with the constraint that M_i(x*) = 0, and receives a constrained key for each M_i. Finally, after polynomially many such queries, the adversary sends its guess: is y* the pseudorandom string or a truly random string?

Once again, coming back to iO for circuits: if you have two circuits C0 and C1 that are functionally identical, indistinguishability obfuscation guarantees that their obfuscations are computationally indistinguishable. And the good news is that, starting with the work of Garg, Gentry, Halevi, Raykova, Sahai, and Waters, we now have a number of candidates for indistinguishability obfuscation.

Now let's start looking at the construction, and to begin with, let's see how to get PRFs for unbounded-length inputs. This is trivial, right? If you have a PRF f that works on bounded-length inputs, then to handle unbounded-length inputs you can just hash the input down and then apply the PRF. Given a long input, you apply the hash function in a Merkle-tree fashion to get a string v, and you output f(k, v). The reason I mention this trivial construction is that ours also starts off this way, and our base PRF is actually very similar.
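The trivial hash-then-PRF domain extension can be sketched as follows. SHA-256 plays the Merkle hash and HMAC-SHA256 stands in for the bounded-input PRF f; both stand-ins are my choice for illustration, not the primitives used in the actual construction.

```python
import hashlib
import hmac

def merkle_hash(blocks) -> bytes:
    """Hash an unbounded list of input blocks down to one 32-byte digest
    by repeatedly pairing and hashing, Merkle-tree style."""
    level = [hashlib.sha256(b).digest() for b in blocks]
    while len(level) > 1:
        if len(level) % 2:                 # duplicate the last node on odd levels
            level.append(level[-1])
        level = [hashlib.sha256(level[i] + level[i + 1]).digest()
                 for i in range(0, len(level), 2)]
    return level[0]

def unbounded_prf(key: bytes, blocks) -> bytes:
    """F(k, x) = f(k, H(x)): hash the unbounded input down to v, then apply
    the bounded-input PRF f (HMAC-SHA256 here) to v."""
    v = merkle_hash(blocks)
    return hmac.new(key, v, hashlib.sha256).digest()
```

The point is simply that the bounded-input PRF only ever sees the fixed-length digest v, regardless of how long the original input is.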
In our construction, the base PRF looks as follows, and it is very similar to the one I just showed, except that we require a puncturable PRF and a special hash function H. Both of these are needed only for the security proof, and I'll talk more about them shortly.

The constrained key is a bit more interesting. Every machine M is associated with a next-step function, which takes a state and a symbol as input and outputs a new state, a new symbol, and a head movement. Our constrained key for a machine M consists of the hash function H together with two obfuscated programs: Program Iterate and Program Start. Program Start takes a string v and outputs a signature on the starting state and the string v. Looking ahead, anyone who wants to use this program would hash the input down to the string v and then obtain a signature on the starting state and v.

Program Iterate is slightly more complicated; essentially, it runs the next-step function iteratively. It takes a time step t, a position p, a symbol and a state, and the hash v of the input. It computes the next step, which means computing the next state and the next symbol; if the new state is the accepting state, it outputs f(k, v), and otherwise it outputs the new state and the new symbol. However, this is clearly not secure, because the adversary can run the program on arbitrary states and symbols. We want to somehow prevent that, and for this we need some additional inputs. The first is an input h, which is the hash of the working tape; using h, the program verifies that the given symbol really is at position p.
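To make the next-step function concrete, here is a toy sketch for a machine that accepts exactly the strings with an even number of 1s. The machine, the state names, and the tape encoding are all my own illustrative inventions; the construction only assumes that M is given by some such delta function.

```python
# Illustrative accepting/rejecting states for a toy machine.
ACCEPT, REJECT = "q_acc", "q_rej"

def next_step(state, symbol):
    """delta(state, symbol) -> (new_state, new_symbol, head_move).
    This toy machine tracks the parity of the 1s it has read."""
    if symbol == "_":                       # blank cell: end of the input
        return (ACCEPT if state == "even" else REJECT, symbol, 0)
    if symbol == "1":
        state = "odd" if state == "even" else "even"
    return (state, symbol, +1)

def run(tape):
    """Iterate the next-step function, as Program Iterate does step by step."""
    state, pos, t = "even", 0, 0
    while state not in (ACCEPT, REJECT):
        sym = tape[pos] if pos < len(tape) else "_"
        state, sym, move = next_step(state, sym)
        pos += move
        t += 1                              # the time step fed to the program
    return state == ACCEPT
```

Program Iterate performs exactly one such `next_step` invocation per call, with the time step, position, state, and symbol supplied (and authenticated) from outside rather than kept in a loop.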
This way, the adversary cannot supply a wrong symbol for position p. Similarly, we also require a signature: the program first verifies a signature on the input state and hash, and at the end it outputs a signature on the new state and the new hash. So this is what the program looks like. It takes a time step t, a position p, a state and a symbol, the hash of the input, the hash of the working tape, and a signature. It first verifies that the symbol is at position p, then verifies the signature on the state and the hash, computes the next step, updates the tape hash using the new symbol, and outputs a new signature.

There are two things I did not explain: how do you verify that a symbol is at position p, and how do you update the hash efficiently? Both are easy with Merkle trees. To prove that a message m is at position p, all you need to give are the sibling hash values along the path from the leaf to the root, so a logarithmic number of hash values; giving a proof is easy. Similarly, to update the hash, all you need are the intermediate hash values along the path from the root to the leaf, so once again only a logarithmic number of hash values.

So far we have not used any special property of our hash function; this would work for any hash function. The special properties come in only in the security proof. Let's go back to the security game, where the challenger chooses a key k, the adversary sends an input x*, and the challenger sends either f(k, H(x*)) or a random string. Let v* denote the hash of x*. For now, let's consider a single machine query, where the adversary sends a machine M such that M(x*) = 0, and the challenger outputs the hash function H together with the two obfuscated programs.
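The logarithmic-size Merkle openings described above can be sketched as follows. This is a generic illustration with SHA-256 and a power-of-two number of leaves, chosen by me for simplicity; it is not code from the paper, and the real construction uses positional accumulators rather than a plain Merkle tree.

```python
import hashlib

def H(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

def build_tree(leaves):
    """All levels of a Merkle tree over a power-of-two number of leaves."""
    levels = [[H(x) for x in leaves]]
    while len(levels[-1]) > 1:
        prev = levels[-1]
        levels.append([H(prev[i] + prev[i + 1])
                       for i in range(0, len(prev), 2)])
    return levels

def open_at(levels, pos):
    """Opening for position pos: the sibling hash at every level, so the
    proof has logarithmic size."""
    proof = []
    for level in levels[:-1]:
        proof.append(level[pos ^ 1])        # sibling of the current node
        pos //= 2
    return proof

def verify(root, leaf, pos, proof):
    """Check that `leaf` sits at index `pos` under `root`, recomputing the
    path from the leaf up to the root."""
    h = H(leaf)
    for sib in proof:
        h = H(h + sib) if pos % 2 == 0 else H(sib + h)
        pos //= 2
    return h == root
```

Updating the hash after writing a new symbol works the same way: recompute the leaf and fold the same logarithmically many siblings back up to get the new root.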
Now, the problem is that Program Iterate contains the PRF key k, which means we need to somehow argue that this program does not reveal f(k, v*). At this point we use the puncturable PRF property: we want to argue that if we replace the key k with a key punctured at v*, everything is still fine. That is, we want to argue that the two programs, with the real key and with the punctured key, are computationally indistinguishable. However, we cannot use iO security directly, because the two programs are clearly different functionally. The hope is that the adversary cannot find an input x that hashes to v*; however, to work with iO we need certain iO-friendly primitives, and ordinary collision-resistant hash functions do not work. So in our construction we require positional accumulators, which are a stronger notion of hash functions, and splittable signature schemes, which are a stronger notion of ordinary signature schemes. Both of these were constructed in the work on iO for Turing machines. Using these primitives, we are able to show that the two programs are computationally indistinguishable. In fact, using the KLW techniques directly would give an exponential number of hybrids, but we use some extra techniques to cut the exponential number of hybrids down to only a polynomial number. Once we have moved to Program Iterate', we can use puncturable PRF security to argue that the adversary learns nothing about the PRF evaluation at the string v*.

To conclude, we construct constrained PRFs for Turing machines based on iO and one-way functions. This directly gives us constructions such as unbounded broadcast encryption and unbounded identity-based non-interactive key exchange. Note that we already had constructions of these primitives before, but our work can be seen as something that unifies all of them.
And finally, we also show attribute-based encryption for Turing machines. Once again, we knew how to do functional encryption for Turing machines, but this construction is much simpler than the FE construction. So with that, I conclude the talk. We have time for questions.

Question: Can you say a word about adaptive security?

OK, so no: our construction only gives selective security. For all of these applications, the direct constructions actually give you adaptive security, but via our construction we only get selective security. That's a good question. Complexity leveraging helps to get adaptive security if you assume sub-exponential security of iO, but in the case of unbounded inputs we cannot do complexity leveraging. More questions? OK, thank you again.