Thanks for your patience, and thanks for attending the session. This talk is essentially about black-box constructions and about secure computation, as is the entire session, so I'll spend the first couple of minutes on those two concepts. First, black-box reductions. Basically, all of computer science is about reductions in some sense. A reduction is just an if-then statement: if this task has an algorithm, then this other task has an algorithm. That's your basic reduction. A black-box reduction is one where you're building an algorithm for a task Y using an algorithm for a task X as a subroutine, and you use only the input-output behavior of the algorithm for X. A non-black-box reduction is one that somehow uses the code of an algorithm for X. Since Impagliazzo and Rudich's famous paper in '89, this has been identified as a really fundamental abstraction boundary in computer science and in cryptography in general, and since then the main question has been: when do black-box constructions exist? That's an interesting question in its own right from a theoretical perspective, but the main motivation is that black-box constructions tend to be more practical, both in their efficiency and in their obvious modularity. This talk is also about secure computation. I won't give too many low-level details; they're not quite relevant for the purposes of this talk. Here is your compulsory one-slide introduction to secure computation: you have multiple parties, each with an input. They agree upon a computation that they want carried out, and everyone gets a particular output from that computation. No one trusts anyone else, and there might be cheaters who are malicious. Despite that, we want to guarantee certain properties for all of the parties, chiefly that you learn nothing more than your prescribed output. That's the important one.
There are also subtler properties, like parties choosing their inputs independently and the outputs being consistent among the parties. Now let's put these two concepts together and talk about black-box reductions in the field of secure computation. In secure computation, a typical theorem statement looks something like this: if some fundamental primitive like trapdoor functions exists, then, in some security model, for every function that you want to compute there is a secure protocol for evaluating that function. If you think about how the quantifiers are arranged, the final output of this theorem statement is a protocol. That protocol depends on the underlying cryptographic primitive, and it also depends on the functionality that you want to compute. Those are the ingredients that go into the protocol, and we want to see how the protocol uses its ingredients, whether in a black-box way or a non-black-box way. For the first ingredient, we know a lot about how secure computation protocols can use their underlying cryptographic primitives in a black-box way. There has been a long line of results, and we finally have some really good protocols that use, say, trapdoor functions in a black-box manner. But if we look at how most of these protocols use the functionality, the typical approach is to represent the function as a circuit and then evaluate the circuit gate by gate. At some level, that's how most of these constructions go, and that is a fundamentally non-black-box usage of the functionality F. So that's the question this paper tries to address: can we make secure computation protocols that are black-box in their usage of the functionality? Some of you may know there's another sense in which a protocol can be black-box, namely in its security reduction.
That's neither here nor there for this talk, but it shows this is not the whole picture about black-box reductions. OK, let's try to define this a little more formally. This is the model introduced in the paper: can we securely compute something without knowing the code of that thing? The paper presents a more general definition, but I'll only have time for the special case of two-party secure function evaluation, which is probably what most people think of anyway when they hear "secure computation". I'm going to define the notion for a class of functions, and I'm going to call it functionality black-box, with the acronym FBB. I try to avoid acronyms, but this one is a mouthful without it. A secure protocol that uses its functionality in a black-box way is a pair of oracle machines such that, when you instantiate those two oracle machines with a function f, the result is a secure protocol for evaluating the function that the oracle realizes. I'll have more to say about this, but one thing I want to make clear is that we can talk about secure protocols that use a trusted setup, and if we do, it really only makes sense to talk about a setup that is the same for every f in the class. If the setup could depend on the functionality you're trying to evaluate, you could just offload everything to the trusted setup, and the question becomes trivial. Another important thing that might look strange is that the definition is for an entire class of functions; the cursive C here is a class of functions. We usually think of securely evaluating a single function, but if the class consists of just a single function, then the oracle machines can have that function hard-coded. They satisfy the definition, but it's not meaningful, because the oracle machines effectively have the code of f built in.
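One plausible way to write the definition just described in symbols (my notation, not necessarily the paper's):

```latex
% A pair of oracle machines (\pi_A, \pi_B) is functionality black-box (FBB)
% for a class \mathcal{C} if the SAME pair works for every member of the class:
\[
(\pi_A, \pi_B) \text{ is FBB for } \mathcal{C}
\iff
\forall f \in \mathcal{C}:\;
(\pi_A^{f},\, \pi_B^{f}) \text{ securely evaluates } f,
\]
% where any trusted setup used by the protocol is fixed once and for all,
% independently of which f \in \mathcal{C} instantiates the oracles.
```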
So that's not the interesting case; it only becomes interesting when we consider a large class of functionalities. More generally, if Alice and Bob can learn the code of the function just by querying their local oracles, then they can query the oracles, recover the code, and just run GMW, compiling the function as a circuit. Those cases satisfy the definition but are not so interesting; we're interested in classes that are not learnable from oracle queries. OK, now a little detour into a concept from complexity theory called auto-reducibility, which is a way of measuring how much structure is in a set or a function. There are a million variants of the definition, but they all look something like this: a set or a function is auto-reducible if there is an oracle machine that computes the same function as its oracle. That sounds goofy, so to make things difficult, the oracle machine is not allowed to simply query the oracle on its own input. That's the general template for auto-reducibility. You can think of it as an indicator of structure: I get an input x and have to output L(x), but I'm not allowed to query the L-oracle on x, so the information about the correct answer has to be encoded somewhere else in L. That means there is some structure there. As an example, the discrete logarithm problem is auto-reducible. Here's the oracle machine; the recursive call stands for a call to the oracle. If you want the discrete log of x, you pick a random a and ask for the discrete log of x times the generator raised to the a, then subtract off a. In particular, this oracle machine asks a query that is distributed independently of its input. That's a special kind of auto-reducibility called instance-hiding auto-reducibility.
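The discrete-log auto-reduction just described can be sketched concretely. This is a minimal toy implementation with hypothetical, far-too-small parameters; the brute-force oracle merely stands in for whatever actually solves discrete log:

```python
import random

# Toy group: Z_p^* with p = 1019 and generator g = 2 (order n = 1018).
# These parameters are for illustration only, nowhere near cryptographic size.
p = 1019
g = 2
n = p - 1

def dlog_oracle(x):
    """Stand-in oracle: brute-force discrete log, viable only at toy size."""
    h = 1
    for a in range(n):
        if h == x % p:
            return a
        h = (h * g) % p
    raise ValueError("element not in the group generated by g")

def dlog_via_oracle(x, oracle):
    """Instance-hiding auto-reduction: the query g^a * x is uniformly random,
    so it is distributed independently of the input x; afterwards we simply
    subtract the mask a back off."""
    a = random.randrange(1, n)          # a != 0, so we never query x itself
    masked = (x * pow(g, a, p)) % p     # uniformly random group element
    return (oracle(masked) - a) % n

x = pow(g, 777, p)
assert dlog_via_oracle(x, dlog_oracle) == 777
```

The key point mirrors the slide: the oracle is queried only on `masked`, whose distribution carries no information about `x`.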
That's the flavor of things we'll need in this talk. Looking ahead: when can you securely compute a function without knowing its code? The answer is: whenever there is a sufficient amount of structure, and the structure is something like this. So let's do exactly that. I'll describe a result from the paper: a characterization for security against semi-honest adversaries. I'm going to define a newfangled kind of auto-reducibility that turns out to be the correct one for this problem. It's called 2-hiding auto-reducibility, because I'm not good at coming up with clever definition names. Again we have an oracle machine that computes the same function as its oracle, same as before. It may look weird that I've given this machine M two oracles, but that's just a convenience so I can distinguish between left oracle queries and right oracle queries. I want to say that queries to the left oracle "don't depend on" y, and queries to the right oracle "don't depend on" x; that's what the two oracles are for. The "don't depend on" is in quotes because I'm giving a high-level overview; it's a notion of independence of distributions that's natural for secure computation, and maybe you can figure out what it has to be as you go through the talk. So that's our definition. Note that it's different from the plain instance-hiding definition I mentioned earlier. Also, we again define it for an entire class of functions, just like the black-box notion: the same M has to work for every F in the class. And we make a fundamental distinction between the two halves of the input, the left input and the right input.
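Filling in the simulation-style reading of "don't depend on" that the speaker hints at, one plausible formalization looks like this (my guess at the notation, not verbatim from the paper):

```latex
% A class \mathcal{C} is 2-hiding auto-reducible if there is a single
% two-oracle machine M such that for every f \in \mathcal{C}:
\[
M^{f,\,f}(x, y) = f(x, y),
\]
% and the two query distributions are hiding:
\[
\underbrace{\{\text{left queries of } M^{f,f}(x,y)\}}_{\text{simulatable from } (x,\, f(x,y))}
\qquad
\underbrace{\{\text{right queries of } M^{f,f}(x,y)\}}_{\text{simulatable from } (y,\, f(x,y))}
\]
% i.e., the left-query distribution reveals nothing about y beyond f(x,y),
% and the right-query distribution reveals nothing about x beyond f(x,y).
```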
The main theorem is that you can get secure computation without knowing the code of the function, for an entire class of functions, if and only if that class has this exact auto-reducibility property. So you can only avoid knowing the code of the function if there's enough structure, and by structure I mean exactly this. This is for security against semi-honest adversaries. Let me also point out the "OT-hybrid" part. If you don't know what that means, that's fine: OT, oblivious transfer, is the most powerful trusted setup you can give to a protocol, so this says that arbitrarily powerful trusted setups don't help for this problem. There are a lot of problems in secure computation where you can just throw a really powerful trusted setup at the problem and everything becomes easy; this is not one of them, which I find kind of interesting. It also means that when we use this theorem to prove impossibility results, they are very strong: they rule out secure protocols no matter what the trusted setup is. OK, let me see how far I can get with the proof. Proof by pictures. Here's one direction: I'm given a secure protocol with this functionality-black-box property. What does that mean? Alice and Bob get inputs, they talk to each other, they talk to their local oracles, and they might talk to the trusted setup, which I haven't drawn. At the end, they should output f(x, y), where f is the oracle they were instantiated with. I want to use that protocol to construct the oracle machine from the auto-reducibility definition. How? I define the oracle machine M to take inputs x and y, give x to Alice and y to Bob, and run their protocol internally, simulating it.
Whenever Alice asks a query, M asks it as a left query, and whenever Bob asks a query locally, M asks it as a right query. Let's take Alice's output to be the output of M. The correctness of the protocol says that when you instantiate M with f-oracles, you get f(x, y); that's one of the properties I need. The security of the protocol says that Alice's entire view, in particular her queries to her oracle, should be independent of y, and if you know secure computation, you know what "doesn't depend on y" has to mean at this point. Similarly, Bob's oracle queries are part of his view, so they can't depend on Alice's input beyond what f(x, y) reveals. That's one direction of the proof, and it didn't take very long, which is good. Let's try the other direction and see how far we get. I'm given an oracle machine from the definition of auto-reducibility. It takes x and y, makes left queries and right queries to its oracle, and at the end outputs f(x, y). I want to use this oracle machine to construct a protocol that uses f as a black box. Alice has x and Bob has y, and here's what they do. This is where I use the fact that I have a powerful trusted setup: using it, Alice and Bob can achieve the effect of having a trusted third party, whom I'll draw in here. Alice and Bob give their inputs to the trusted third party, who starts running M. At some point M makes, say, a left oracle query. The trusted party doesn't have f, but Alice and Bob do, because that's part of the setup here. So for a left query, the trusted party says: hey, Alice, here's a query, what's the answer? We're talking about semi-honest security, so Alice just responds honestly, and her answer is taken as the response to the left oracle query.
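The trusted-party compilation can be sketched as follows. This is an illustrative mock-up, not the paper's actual construction: the auto-reduction M is modeled as a Python generator that yields `("left", query)` or `("right", query)` requests, and each semi-honest party answers by evaluating its local oracle for f (Bob's symmetric role is included). The toy M and the addition function are hypothetical stand-ins; addition is trivially learnable, so this only illustrates the plumbing:

```python
def run_protocol(M, x, y, f):
    """Trusted party runs the auto-reduction M on (x, y); Alice answers the
    left oracle queries and Bob the right ones, each via their own f-oracle."""
    machine = M(x, y)
    request = next(machine)          # first oracle request from M
    while True:
        side, query = request        # 'left' -> Alice, 'right' -> Bob
        answer = f(*query)           # the semi-honest party responds honestly
        try:
            request = machine.send(answer)
        except StopIteration as done:
            return done.value        # M's output, handed to both parties

# A toy 2-hiding auto-reduction for f(x, y) = x + y: the left query (x, 0)
# does not depend on y, and the right query (0, y) does not depend on x.
def M_add(x, y):
    a = yield ("left", (x, 0))       # answer: f(x, 0) = x
    b = yield ("right", (0, y))      # answer: f(0, y) = y
    return a + b
```

For instance, `run_protocol(M_add, 3, 4, lambda u, v: u + v)` returns `7`. Note that `run_protocol` is the same code for every f; only the oracle the parties hold changes, which is exactly the functionality-black-box requirement.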
The symmetric thing happens for Bob. The important point is that this entire protocol, including the trusted third party, is the same for every f in the class, so the protocol treats f as a black box. It might not treat M as a black box; the protocol might use the code of M in a very non-trivial way, but it's the same M for every function in the class, so that's OK under this definition. Eventually M produces an output, the trusted third party gives it to both Alice and Bob, and that's what they finally output. As long as Alice and Bob answer correctly, we're feeding the right things into M, so the output is correct. For security, look at Alice's view in this interaction: she sees the final output and the left oracle queries. The left oracle queries don't depend on Bob's input, so they can be simulated. So we have a simulator for a corrupt Alice, and symmetrically for a corrupt Bob. I don't have much time left, but that's basically our characterization. We can use it to show a positive result: there is a class of functions that has this auto-reducibility property, which means you can evaluate these functions without knowing their code, and in fact this class is not learnable from oracle queries, so it's infeasible to even learn the code of these functions just by querying them. So this is one of those non-trivial examples. Unfortunately, it's a slightly contrived example, which makes it a good opportunity for some interesting follow-up research. On the negative side, we'd like to do this for all functionalities, like a universal Turing machine evaluator that somehow doesn't use the code of the Turing machine. That's not possible. In particular, you can't do this even for the class of pseudorandom functions; think of Alice holding the seed and Bob holding the input. And because it's a negative example, it holds even against semi-honest adversaries.
You can't do this even if you have oblivious transfer. I'm out of time, but the paper also contains a characterization for malicious adversaries (not a complete characterization), a more general definition than the one I've given, and an impossibility result for zero knowledge: you can't prove a statement related to one-way functions in zero knowledge if the proof is not allowed to use the code of the one-way function. As a summary: I gave a definition of secure computation that doesn't use the code of the functionality it realizes, and a complete characterization based on auto-reducibility. The title of the talk was a question, and the answer is: no in general, but yes, in some interesting cases you can securely evaluate F without knowing the code of F. I've gone past my time, so thanks for your attention.