Hello, and good morning. This is joint work with Kan Nauen and Poulton-Renzi; I'm Istihi. Today we will discuss how fully black-box reductions naturally lift to setup assumptions. We will begin the talk with an overview of fully black-box reductions and give a simple example for concreteness. Then we will go beyond the standard model, introduce setup assumptions, and, one by one, identify the issues with just trying to make the reduction work there. Then we will state our result informally, define in more detail what we mean by a setup assumption, and identify the basic characteristics and necessary properties of one. We will finish the talk by digging into the proof and the intuition we unearth from this result. Let us begin. Let us turn to black-box reductions. Suppose there are two primitives, P and Q. We wish to show that no adversary of some kind, let's say probabilistic polynomial time, can break P, and we normally assume that no such adversary can break Q. Well, the bread and butter of cryptographers worldwide is defining a reduction R and aiming for a proof by contradiction: namely, if an adversary can break P, then there is an adversary, the reduction R running it, that can break the instantiation of Q. We say that the reduction is black-box if it has no access to the internal workings of the adversary or the primitive. What does that mean in practice, though? Let's work over a concrete example. Pick Lamport one-time signatures. The general idea is that the signer picks pairs of random strings and uses one-way functions to commit to that randomness without disclosing it; the commitments form the public key. Then a signature can be constructed by just revealing the corresponding first or second element of each pair, depending on each bit of the message. And that's the full idea, but it is one-time. In practice we want efficiency, so the next idea comes to mind.
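To make the Lamport construction just described concrete, here is a minimal Python sketch, with SHA-256 standing in for the one-way function; all function names and parameters are illustrative, not from the talk.

```python
import hashlib
import os

def H(x: bytes) -> bytes:
    # Stand-in for the one-way function (SHA-256 here).
    return hashlib.sha256(x).digest()

def keygen(msg_bits: int = 8):
    # Secret key: one pair of random strings per message bit.
    # Public key: commitments to those strings via H.
    sk = [(os.urandom(32), os.urandom(32)) for _ in range(msg_bits)]
    pk = [(H(a), H(b)) for a, b in sk]
    return sk, pk

def sign(sk, msg: int):
    # Reveal the first or second element of each pair,
    # depending on the corresponding message bit.
    return [pair[(msg >> i) & 1] for i, pair in enumerate(sk)]

def verify(pk, msg: int, sig) -> bool:
    # Check each revealed preimage against the committed value.
    return all(H(s) == pk[i][(msg >> i) & 1] for i, s in enumerate(sig))
```

The key pair can sign exactly one message: revealing preimages for a second message would let a forger mix and match, which is why the scheme is one-time.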
Okay, so we have proven there is a reduction in the standard model, and now we add a random oracle. Well, we need to prove that our reduction still works in the presence of the random oracle, and that we don't end up in a case where our whole scheme is broken. So our main question is: does the adversary break the signature scheme we have? Well, now both the adversary and our instantiation of the one-way function, say we call it f, have access to the random oracle. There are a few questions. Usually the reduction, let's say in the non-programmable case, provides the adversary direct access to the oracle: it relays the queries back and forth with the adversary but does not actually interfere with the oracle. So say we have that covered. How does it handle the adversary now? Because in the standard model we proved security against, as we said, probabilistic polynomial-time adversaries. Well, now the adversaries are oracle machines. What do we sample over? How do we connect the different random variables that we take the probability over? And the astute in the crowd have probably thought: okay, well, what if the adversary is inefficient and just keeps querying the random oracle? What happens? Well, a random oracle has an infinite state, so we have to account for that. This also brings correctness into question: we now need to ensure that whatever reduction we have, and the construction we are describing, do not have issues with correctness. We will talk about all of this. In summary, we need to reason about reductions that now have access to a random oracle. Under this new setup assumption, we need to establish correctness, and we need to establish security. To do that, we need to describe how the reduction even handles the random oracle, or generally the setup assumption we have established. We need to describe how, and over what, we are going to sample. We now have oracle machines we need to sample over, for instance the adversaries.
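The relaying just described, where the reduction passes the adversary's oracle queries through without touching the answers, can be sketched as follows; this is a hypothetical illustration of the non-programmable setting, and all names here are invented for the sketch.

```python
import os

def make_oracle(out_len: int = 16):
    # A lazily sampled random oracle, shared by all parties.
    table = {}
    def O(x: bytes) -> bytes:
        if x not in table:
            table[x] = os.urandom(out_len)
        return table[x]
    return O

class NonProgrammableReduction:
    # The reduction relays the adversary's oracle queries
    # verbatim: it may observe them in passing, but it never
    # reprograms the answers (the non-programmable setting).
    def __init__(self, oracle):
        self.oracle = oracle
        self.transcript = []  # queries observed while relaying

    def _relay(self, x: bytes) -> bytes:
        y = self.oracle(x)  # the answer comes straight from O
        self.transcript.append((x, y))
        return y

    def run(self, adversary):
        # The adversary is used only as a black box, and sees
        # the oracle only through the relayed interface.
        return adversary(self._relay)
```

A programmable reduction, by contrast, would be allowed to choose the answers it returns in `_relay`; that stronger setting is the one the talk leaves open.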
As we said, an adversary may perhaps be unbounded, and thus may decide to keep querying the oracle to infinity and beyond. We want to show that a fully black-box reduction still lifts to such a setup assumption. Recall that there is considerable work, some of it shown on the slide, on black-box reduction hierarchies. Reingold, Trevisan, and Vadhan described the first fundamental abstract framework. Later, Baecher, Brzuska, and Fischlin created a much finer hierarchy that we will revisit later. Hofheinz and Nguyen introduced a concrete framework to reason about reductions, upon which we build. Let us give some examples. Common setup assumptions in use are of course the random oracle model, the ideal cipher model, and the common reference string, or common random string. There are also more exotic examples, such as the random beacon. Going back to the Lamport example, let us add the random oracle for more efficient randomness. Now we have to reconsider the reduction we have proven in the standard model, and the question is what exactly happens. There are a few questions. At the end of the day, we need to show that that little devil of an adversary did not break our Lamport signature scheme. Under the random oracle, our one-way function needs to query the oracle, and the reduction we seek needs to handle the oracle queries for the adversary. Note that we consider non-programmable reductions; that means the reduction does not interfere with the oracle. We leave programmable reductions open. That is, we aim for a fully black-box, non-programmable reduction. We also need to show that our scheme is still correct: a secure scheme that does nothing is kind of useless. Does giving access to the oracle interfere with correctness? And how do we even define correctness in past models? It could be simply set membership; we give a more concrete treatment in our work. Finally, we do not sample over the same adversary as in the standard model. Recall that this is an adversary that has access to an oracle.
It's an oracle machine. We have to consider both its oracle and its own randomness; the probability we sample over is different. And recall the adversary can be unbounded: it could just attempt to read the whole oracle state. We tackle all these problems. In particular, we show that the standard model has a lifting correspondence to any well-defined setup assumption. In detail, for any primitives p and q and any setup assumption m which satisfies certain properties, any fully black-box non-programmable reduction from p to q naturally lifts to the setup assumption m. Note that we build on top of Hofheinz and Nguyen, and not on the framework of Reingold, Trevisan, and Vadhan, which is more abstract. That leaves the question of where we stand in the fine hierarchy of Baecher, Brzuska, and Fischlin. As we said, we consider fully black-box reductions; in their hierarchy, that is the BBB kind. We leave open the question of the existence of lifting correspondences for the other kinds. How do we define a setup assumption? We need a generic definition that encompasses all established models. We could consider a setup assumption as a construct that samples over all possible functions from a domain X to a range Y. We sample with some distribution, and that generally covers most oracles. However, we do have to consider unbounded adversaries that could query the whole oracle state, and a random oracle, for instance, means an infinite state. That is, X could be infinite. To handle that, we break X down into a series of sets and progressively sample from there. To keep it simple, let us consider a single parameter l. We define a filtration X_l over X: we create a sequence of sets that progressively covers X. We then prove that if the sampling is what we call consistent, which practically means that querying for more state does not alter the perceived distribution of the state you had already queried,
that is, let's say you query n bits of information, then you query n+1 bits and try to forget the extra bit, you are not going to see a difference. If that is the case, we converge to the desired distribution. Note that when we say infinite, we mean countable; that is, X is still a discrete set. We do not go into the real numbers; that is out of our scope. The parametric sampling that we described extends further and forms the crux of our proof. Now, what is the intuition we unearth with our proof? We already established that we are working with a measurable space over a countable set, which makes things more interesting and essentially forces us to prove convergence for all our distributions. And as we said, we do that by parameterizing access. At the end of the day, though, we want to show the following: say the standard model is on the left side and the setup assumption is on the right side. If we have a primitive p and some instantiation f of another primitive, and an unbounded adversary in the standard model breaks p, then it will break f. That fact, we claim, is enough to show that any adversary, a different adversary with oracle access in the setup assumption, that is able to break p is also able to break f with access to the oracle O. Our security proof now samples over oracle instantiations, and we do sample over all adversaries that have access to the oracle O. Thus, we prove that the reduction in the standard model can be used to bound the advantage of adversaries with access to a particular oracle instantiation. Recall from the earlier setup-assumption slide that we need to parametrically allow access to the state of the oracle progressively. In the proof, we essentially define sequences of instantiations of f and also sequences of instantiations of the adversary a. Now, we need to model sampling over oracle instantiations. We first order all oracle instantiations, and for that we are going to use the security parameter lambda.
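The "consistent, progressive sampling" idea can be illustrated with lazy sampling; the following is a hypothetical Python sketch, where the finite table after the first l distinct queries plays the role of the filtration level X_l. All names are invented for illustration.

```python
import os

class LazyRandomOracle:
    # Sketch: an oracle over an infinite (countable) domain,
    # sampled lazily. Answering a fresh query extends the
    # finite table of revealed state, but never alters answers
    # that were already revealed, which mirrors the consistency
    # property described above: revealing more state does not
    # change the distribution of what was already seen.
    def __init__(self, out_len: int = 32):
        self.table = {}        # the revealed portion of the state
        self.out_len = out_len

    def query(self, x: bytes) -> bytes:
        if x not in self.table:
            self.table[x] = os.urandom(self.out_len)
        return self.table[x]
```

Even an unbounded adversary that keeps querying forever only ever pins down a finite prefix of the state at any point in time, which is the intuition behind parameterizing access by l.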
So for each value of the security parameter lambda we create a row, and for each particular oracle instantiation, as we increase lambda, we get a path. If an adversary a breaks an instantiation of a primitive p, that means that for some path, after some security parameter value, the probability of our construction outputting one is non-negligible in the security parameter lambda. Let us call that a red path. Thus, we divide our proof into two segments. First, we show that a fully black-box reduction in the standard model implies the existence of a correspondence, a reflective one, of expectations between each row of the paths we have described. That means, again, that if the expectation is non-negligible on the left, then it is a non-negligible function on the right. The second step we show is that we can use this expectation over all of the paths that define oracle instantiations to bound the advantage of the adversary in the setup assumption. And thus, having started with a reduction in the standard model, we have proven that there exists a reduction in the setup assumption. To sum up, we saw that any non-programmable fully black-box reduction in the standard model lifts to setup assumptions that are well defined, even if the adversaries are inefficient. We left open the question about the rest of the hierarchy of Baecher, Brzuska, and Fischlin, in particular whether we can extend this lifting correspondence to the other kinds of reductions. Feel free to shoot us an email with any questions. We are always happy to discuss and chat. Thank you very much. Have a nice day.