So, let me start with the broad theme of this talk. We are interested in developing generic mechanisms to improve the efficiency of indistinguishability obfuscation (IO). This is motivated by the fact that by now we have a lot of theoretical applications of IO, and the efficiency of all of those applications ultimately depends on the efficiency of IO itself; that motivates the goal of improving the efficiency of IO in a generic manner.

Let me start with a very brief overview of the main lines of research in indistinguishability obfuscation. One of the primary lines is about constructing indistinguishability obfuscation for general circuits, and by now there is a lot of work in this area, where the main goal is to get better and better security based on weaker and weaker assumptions. While this is still a very active line of research, a parallel line has been about improving the efficiency of IO. Here I will focus on the line of research that concentrates on obfuscating more efficient representations of programs, for example Turing machines. Starting with the work of Bitansky et al., we now have schemes that obfuscate Turing machines directly, without the overhead of first transforming them into circuits, relying only on IO for circuits. One of the main advantages of these works is that they achieve per-input running time, as opposed to the worst-case running time over all inputs that is inherent in the circuit model. One caveat is that correctness only holds for inputs of some a priori bounded length. While in this talk I will only focus on IO for Turing machines, let me mention that there are also important works extending these results to obfuscating RAM programs; here I will only talk about Turing machines.

In particular, our goal is to construct obfuscation for Turing machines with better efficiency, so let me elaborate on what I mean by better efficiency. We are interested in two problems. The first problem is size efficiency: we compare the size of the obfuscated program to the size of the unobfuscated program. The results so far incur a polynomial overhead in the size of the underlying program; that is, the size of the obfuscation of a program M is poly(λ, |M|, L), where λ is the security parameter and L is the upper bound on the input length. What we want is constant overhead: the size of the obfuscated program should be only a constant times the size of the underlying program M, plus an additive polynomial overhead in the security parameter and the input-length bound L, that is, c·|M| + poly(λ, L). While we study this question for Turing machines, it is already quite interesting for circuits, and in that case the work of Bitansky and Vaikuntanathan showed how to resolve it. The second problem we are interested in is amortization. Let me explain what I mean. Say we are given an indistinguishability obfuscator for circuits of some a priori fixed size, and now somebody tells us that they want to obfuscate some polynomial number n of Turing machines, where this polynomial could be arbitrary.
The question is: how many invocations of the underlying indistinguishability obfuscator for circuits do we need in order to obfuscate all of these Turing machines? In particular, can we obfuscate all of them with fewer than n invocations of the underlying obfuscator? Ideally, we want to go as small as possible. That is the question of amortization.

Before stating our results, let me mention that if you pause for a moment and think about both of these questions, you will see that they are quite easy to resolve if you are given IO for Turing machines that supports inputs of unbounded length; you can verify this on your own. However, such IO schemes are presently not known from IO for circuits. The only ways we know to achieve IO for Turing machines with unbounded input length go through the stronger notion of differing-inputs obfuscation, a weaker variant of it called public-coin differing-inputs obfuscation, or another notion called output-compressing randomized encodings, and none of these primitives are known to be realizable from indistinguishability obfuscation. Our goal is to solve both of these problems using only IO for circuits.

That brings me to our results. Our first result is IO for Turing machines with constant multiplicative overhead, where the constant is simply two. The assumptions are sub-exponentially secure IO for circuits and a rerandomizable encryption scheme. The second result is IO for Turing machines with amortization, in fact the best possible amortization: if we want to obfuscate some polynomial number of Turing machines, we can obfuscate them all by making only one invocation of an IO scheme for circuits whose size is a priori fixed, where the size of the underlying circuit family depends only on the security parameter and the input-length bound L that we assume for all the Turing machines. The assumptions are the same as in the first result.

Let me now go over some of our techniques. In this talk I will primarily focus on the first result, achieving IO with constant overhead. Towards that, let us briefly recap how IO for Turing machines is built presently, that is, the template followed in existing works. These works use two ingredients. The first is IO for circuits, a general-purpose indistinguishability obfuscator. The second is a randomized encoding for Turing machines, and this is really the hard part of these constructions; I am not going to discuss how it is built. Given both ingredients, here is how you obfuscate a Turing machine M. You compute an indistinguishability obfuscation of the following circuit: the circuit has the machine M hardwired inside it, together with a PRF key. Upon receiving an input x for the machine M, the obfuscated circuit derives randomness from the PRF, and using this randomness it computes and outputs a fresh randomized encoding of the machine M together with the input x. The decoding algorithm is public, so upon receiving this encoding, any evaluator can simply decode it and obtain the output M(x). That is the current template.
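To make this template concrete, here is a minimal Python sketch. All of the interfaces in it — iO, RE, PRF, sample_prf_key — are hypothetical stand-ins (passed in as parameters) for an obfuscator for circuits, a randomized encoding scheme for Turing machines, and a puncturable PRF; it is only meant to show the data flow, not any concrete scheme.

```python
# A sketch of the existing template for obfuscating a Turing machine M.
# Assumed (hypothetical) interfaces:
#   iO(circuit)        -> obfuscated circuit (callable)
#   RE.encode(M, x, r) -> randomized encoding of (M, x) under randomness r
#   RE.decode(enc)     -> M(x)
#   PRF(K, x)          -> pseudorandom string used as encoding randomness

def obfuscate_tm_old_template(M, security_param, iO, RE, PRF, sample_prf_key):
    K = sample_prf_key(security_param)

    def C(x):
        # M and the PRF key K are hardwired inside this circuit.
        r = PRF(K, x)                 # per-input randomness
        return RE.encode(M, x, r)     # fresh randomized encoding of (M, x)

    return iO(C)

def evaluate_old_template(obfuscated_M, x, RE):
    encoding = obfuscated_M(x)        # run the obfuscated circuit on x
    return RE.decode(encoding)        # public decoding recovers M(x)
```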
Now I want to point out the main bottlenecks this template presents towards achieving the first goal, constant overhead. If you look at the template, the first bottleneck is that since the machine M is embedded inside the circuit, to achieve constant overhead we already need to start with a circuit obfuscation scheme that achieves constant overhead. But an even bigger bottleneck is that, because the encoding algorithm of the randomized encoding is also embedded inside the circuit, we in fact need an even stronger property: we require a randomized encoding scheme whose running time incurs only a constant overhead in the size of the Turing machine M. This is an extremely hard problem. In fact, even for classical primitives such as encryption schemes, we only know how to achieve constant overhead in running time using non-standard assumptions. These are the main bottlenecks one faces when trying to extend this template towards our goals.

Let me mention one more point: if you look at this template, there is some redundancy. The machine M is re-encoded every time you want to evaluate it on some input. It would be nice if you could pull the machine M out of the obfuscation and encode it only once. This is essentially what we do, and it turns out to be the key to achieving our goals.

So let me now present our new template, which is the main point of this work: a new template for obfuscating Turing machines that turns out to be very useful for achieving both of our goals. In order to obfuscate a Turing machine M, we use two ingredients. The first ingredient is IO for circuits, as before. For the second one, I put a question mark for now, because I want to derive the properties we would need from it. Let us first focus on IO for circuits. We use IO for circuits to obfuscate some kind of input encoder, which takes an input x for the machine M and outputs an encoding of that input. In particular, this input encoder is independent of the Turing machine M, so the IO for circuits does not operate on the machine M at all. Now, if we look at the encoding of x and the encoding of the machine together, we want them jointly to represent a randomized encoding. Because they are encoded separately, this is a decomposable randomized encoding. Moreover, since the machine M is encoded only once, we want this randomized encoding to have a reusability property: it should be possible to reuse the encoding of M many times, for evaluating different inputs. Finally, we should be able to construct such a reusable randomized encoding without using IO; that is really the point. And in order to achieve our first goal of constant overhead, we would want to construct such a reusable decomposable RE with constant overhead. That turns out to be the key technical contribution of this work.
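Here is a matching sketch of the new template, under the same hedging: REnc is a hypothetical reusable decomposable randomized encoding with separate machine and input encoders, and only the input encoder goes through IO for circuits.

```python
# A sketch of the new template.  Assumed (hypothetical) interfaces:
#   REnc.setup(lam)              -> secret key sk
#   REnc.encode_machine(sk, M)   -> machine encoding (computed once, outside iO)
#   REnc.encode_input(sk, x)     -> input encoding
#   REnc.decode(enc_M, enc_x)    -> M(x)

def obfuscate_tm_new_template(M, security_param, iO, REnc):
    sk = REnc.setup(security_param)

    def input_encoder(x):
        # Independent of M: it only encodes the input x under sk.
        return REnc.encode_input(sk, x)

    encoded_M = REnc.encode_machine(sk, M)    # M is encoded once, outside the obfuscation
    return (iO(input_encoder), encoded_M)

def evaluate_new_template(obfuscation, x, REnc):
    obf_encoder, encoded_M = obfuscation
    return REnc.decode(encoded_M, obf_encoder(x))   # the two encodings together form an encoding of (M, x)
```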
To get there, let me first elaborate on this notion of a reusable decomposable RE. As you are probably familiar, when working with IO we typically need to modify notions to make them IO-friendly. Our first step is to formalize a notion of reusable decomposable RE that is friendly towards IO and facilitates the security proof; this is what we call oblivious evaluation encodings (OEE).

To explain this notion, consider a setting where Alice holds an input x and a bit b, and Bob holds two machines, M0 and M1. We want that, given some encodings, an evaluator should be able to compute M_b(x), but should not be able to learn which of the two machines was used to compute the output. Oblivious evaluation encodings resolve this: there is a setup that generates a secret key, which is given to Alice and Bob. Using the secret key, Alice encodes the input x together with the bit b, and Bob encodes the machines M0 and M1. When the evaluator receives both encodings, it runs a decoding operation to learn M_b(x).

Here is the syntax, which states this a bit more precisely: there is a setup algorithm that outputs a secret key, a Turing machine encoding algorithm, an input encoding algorithm, and finally the decoding algorithm. So far this is very similar to a standard reusable randomized encoding scheme, except that we encode two machines instead of one, and we encode a bit together with the input. What separates this notion from the existing one are the two auxiliary algorithms I will mention next. But before that, let me say that constant overhead in this setting means that the size of the Turing machine encoding is a constant times the size of the machines M0 and M1, plus some polynomial in the security parameter.

Here are the two auxiliary algorithms. The first allows you to compute punctured keys, where the key can be punctured at any point x in the input space. Correctness says that a key punctured at a point x can still be used to encode any other point in the input space, with respect to bit 0 or bit 1 (both, in fact). The security property says that even given this punctured key, you cannot distinguish an encoding of the punctured point x with respect to bit 0 from an encoding of x with respect to bit 1; this holds even given some extra information, but let us ignore that for now. The second auxiliary algorithm punctures on the bit instead of the input: the key can also be punctured at a bit b. Correctness says that a key punctured at bit b can still be used to encode any input with respect to bit 1−b. The security property says that, given for example the key punctured at bit 1, you cannot distinguish an encoding of the machine pair (M0, M0) from an encoding of (M0, M1), because a key punctured at bit 1 does not allow you to compute outputs with respect to the second machine in the pair.
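To summarize the syntax, here is a hypothetical Python interface for an OEE scheme; the method names and signatures are my own illustrative choices, not the paper's notation.

```python
from typing import Any, Protocol

class OEEScheme(Protocol):
    """Hypothetical interface for oblivious evaluation encodings (OEE)."""

    def setup(self, security_param: int) -> Any:
        """Output a secret key sk."""

    def encode_machines(self, sk: Any, M0: Any, M1: Any) -> Any:
        """Encode the machine pair; target size ~ c * (|M0| + |M1|) + poly(lambda)."""

    def encode_input(self, sk: Any, x: bytes, b: int) -> Any:
        """Encode the input x together with the bit b."""

    def decode(self, machine_encoding: Any, input_encoding: Any) -> Any:
        """Recover M_b(x) from the two encodings."""

    def puncture_at_input(self, sk: Any, x: bytes) -> Any:
        """Key punctured at x: still encodes any x' != x, with either bit."""

    def puncture_at_bit(self, sk: Any, b: int) -> Any:
        """Key punctured at bit b: still encodes any input with respect to bit 1 - b."""
```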
Given that notion, let us assume for now that we know how to construct it, and see how to use oblivious evaluation encodings to construct IO for Turing machines with constant overhead. The construction is quite simple. Say we want to obfuscate a Turing machine M. The obfuscation has two components. The first is an IO for circuits applied to the circuit that takes any input x and computes an encoding of that input under the OEE scheme. The second is simply the OEE Turing machine encoding of the machine M repeated twice, that is, of the pair (M, M). The first component is independent of the machine M, and for the second component, since we start from a scheme with constant overhead, the resulting obfuscation also has constant overhead. Pictorially: we start with some input x, compute an encoding of x with respect to the bit 0, take the two encodings together, decode, and obtain M(x).

Very quickly, the security proof goes as follows. We want to argue that an obfuscation of M0 is indistinguishable from an obfuscation of M1. The first step is to switch from encoding the pair (M0, M0) to (M0, M1); here we use the bit-puncturing key. Then we do the standard IO gymnastics that people are familiar with by now, namely the positional IO techniques from the works of Gentry et al., where we switch from computing on M0 to computing on M1, one input at a time; here we use the input-puncturing key. At the end, only M1 is being used for the computation, so we can switch from (M0, M1) to (M1, M1), again using the bit-puncturing key. I do not expect you to absorb the whole proof; the point is just that the notion of OEE was tailored to work with IO, as a natural extension of reusable randomized encodings that facilitates the security proof.

Now let me briefly go over how we construct an oblivious evaluation encoding scheme with constant overhead. We follow a two-step approach. The first step is to construct attribute-based encryption (ABE) for Turing machines with constant overhead, and the second step is to compile it into an OEE scheme while preserving the efficiency properties. In fact, the ABE scheme we need is only single-key: we only need security in the presence of a single attribute key. In this talk I will skip the second step and only talk briefly about the first step, constructing an ABE scheme for Turing machines.

Here is a quick recap of what an attribute-based encryption scheme is. There is a setup algorithm that generates a public key and a master secret key. The encryptor can use the public key to encrypt any attribute x together with a secret message. The other party, Bob, can use the master secret key to compute a secret key tied to some machine M. Given the secret key and the ciphertext, the evaluator can learn the secret message only if the machine M evaluates to one on the attribute x. Again, the notion of constant overhead here can be suitably defined.
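For reference, here is a hypothetical sketch of the single-key ABE syntax for Turing machines, with illustrative names that are not taken from the paper.

```python
from typing import Any, Optional, Protocol, Tuple

class ABEForTuringMachines(Protocol):
    """Hypothetical single-key attribute-based encryption for Turing machines."""

    def setup(self, security_param: int) -> Tuple[Any, Any]:
        """Output (public_key, master_secret_key)."""

    def encrypt(self, public_key: Any, attribute_x: bytes, secret_message: bytes) -> Any:
        """Encrypt secret_message under the attribute x (the attribute itself is not hidden)."""

    def keygen(self, master_secret_key: Any, M: Any) -> Any:
        """Derive an attribute key tied to the Turing machine M."""

    def decrypt(self, key_for_M: Any, ciphertext: Any) -> Optional[bytes]:
        """Return secret_message if M(x) = 1, and None otherwise."""
```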
Okay, so in order to construct an attribute-based encryption scheme for Turing machines with constant overhead, our starting point is the work of Koppula et al., which has driven a lot of subsequent work in this area. I will only talk about one of their main results: constructing message-hiding encodings (MHE). If you are not familiar with this notion, you can think of it as very similar to a randomized encoding, except that here we encode a machine M, a label x, and a secret message, and the evaluator learns the message only if M(x) equals one.

At a high level, their construction looks like this. There is a work tape, initialized with the input, and we build a storage tree on top of this work tape using something called positional accumulators; you do not need to know exactly what they are. Once we have the root of the tree, we compute a signature on it, again using an IO-friendly notion of signatures called splittable signatures. The second component of the construction is an obfuscated next-step function of the Turing machine M for which we want to compute an MHE. This obfuscated next-step function also has the secret message hardwired inside it, together with the key pair used to compute the splittable signature on the root. Finally, there is a counter that maintains the current state, initialized to the start state. Evaluation works in a very natural way: you start by reading the first memory location, take the path from that leaf to the root together with the signature on the root, and feed everything to the obfuscated program. The obfuscated program verifies everything, computes the next step, and outputs the new root together with a new signature on it. The evaluator can then update the entire storage tree on its own and continue the computation step by step. Finally, if the computation hits the accept state, the obfuscated program outputs the secret message. (I will show a rough sketch of this evaluation loop shortly.) The security proof for this construction again follows a pattern that people in the IO literature are familiar with: there is a sequence of sets of hybrids, where in the i-th set of hybrids only the i-th step of the computation of M on x is authenticated, and nothing else can be verified by the obfuscated program. Keep that in mind for when I mention the main issue in extending this idea to the ABE setting.

We face two challenges here. The first is that in a message-hiding encoding, the machine and the input x are encoded together, whereas in an attribute-based encryption scheme we are by definition required to encode them separately. The second is that in a message-hiding encoding the encoding cannot be reused; it is a one-time object. An attribute-based encryption scheme, in contrast, is reusable: the attribute key for a machine can be reused across many ciphertexts. Handling the first issue, decomposability, actually turns out to be easy. There is already a natural separation in the construction of Koppula et al., in that the input x and the machine M are encoded separately. The only odd thing is that the message is encoded with the machine, whereas we want to encode the message together with the input x. This is easy to fix.
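Here is the promised rough sketch of that evaluation loop, again with hypothetical names: tree stands for the accumulator storage tree over the work tape, and obf_next_step for the obfuscated next-step program with the secret message hardwired inside it. It only conveys the data flow, not the actual authentication mechanics.

```python
# Hypothetical sketch of MHE evaluation in the style of Koppula et al.

def evaluate_mhe(tree, root_signature, obf_next_step, max_steps):
    state = 0                                    # counter starts at the initial state
    position = 0                                 # first memory location to read
    for _ in range(max_steps):
        symbol, path = tree.read_with_path(position)    # leaf value plus path to the root
        step = obf_next_step(state, position, symbol, path,
                             tree.root(), root_signature)
        if step.accepted:
            return step.secret_message           # released only upon reaching accept
        # The obfuscated program verified the path and the root signature, performed
        # one step of the machine, and returned the new root plus a fresh signature.
        tree.write(step.write_position, step.write_symbol)   # evaluator updates the tree itself
        root_signature = step.new_root_signature
        state, position = step.new_state, step.next_position
    return None
```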
Coming back to decomposability: we just flip the roles of the machine and the input by using a universal Turing machine. Now the work tape is initialized with the Turing machine M and corresponds to our ABE key, while the obfuscated program, which has the input and the message hardwired, corresponds to the ABE ciphertext. So decomposability was easy to deal with.

The main challenge turns out to be reusability. I will not have time to explain too much, but the main point is that in the proof of their construction you have to do puncturing at some point, which is the common step in all IO proofs. The way puncturing is done in their proof is that the verification key of the signature scheme is punctured in such a manner that it only authenticates the i-th step of the computation of M on x, and nothing else. This is fine if you are computing on just one input. In our case, however, we want to compute on multiple inputs, and once the verification key has been punctured with respect to a single computation, it becomes incompatible with all other computations, so the argument no longer goes through. To resolve this problem, we introduce an idea called signature synchronization; here is a quick pictorial presentation of it. In each ABE ciphertext, we use fresh signature keys, that is, a fresh key pair for the splittable signature scheme, whereas the ABE key signs the root under some fixed master signing key. To ensure correctness, each ABE ciphertext also carries something we call a translator, which transforms signatures under the master signing key into signatures under the key pair embedded inside that ciphertext. I will not describe how it is implemented, but this is really the heart of the construction.

Finally, how do we achieve constant overhead? That turns out to be easy once all of this is in place. The main issue is the ABE key, which consists of the work tape and the storage tree, and as such it might not have constant overhead. To address this, we observe that the evaluator does not actually need to be given the storage tree; it can recompute the storage tree on its own. Therefore we can simply delete the entire storage tree and keep only the description of the machine M and the signature on the root. This is our new ABE key, and it has constant overhead. That is actually it; let me just close with a rough sketch of this last point.
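To make that last point concrete, here is a hypothetical sketch of the compressed key: key generation signs the accumulated root but ships only the machine description plus that signature, and decryption rebuilds the storage tree locally. The function and field names are illustrative assumptions, not the paper's.

```python
# Hypothetical sketch of the constant-overhead ABE key.
#   build_storage_tree(params, tape) is the deterministic, public procedure that
#   accumulates a work tape into a storage tree, so the evaluator can recompute it.

def abe_keygen_compressed(M_description, acc_params, master_signing_key,
                          sign, build_storage_tree):
    tree = build_storage_tree(acc_params, M_description)   # work tape holds M itself
    root_signature = sign(master_signing_key, tree.root())
    # Only |M| plus a poly(lambda) additive term is stored: constant multiplicative overhead.
    return (M_description, root_signature)

def abe_decrypt_compressed(key, ciphertext, acc_params,
                           build_storage_tree, evaluate_mhe):
    M_description, root_signature = key
    tree = build_storage_tree(acc_params, M_description)   # recompute the tree locally
    return evaluate_mhe(tree, root_signature,
                        ciphertext.obf_next_step, ciphertext.max_steps)
```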