So, instead we're going to hear about succinct, non-interactive arguments in relativized worlds, with Megan Chen presenting on behalf of her coauthors, Alessandro Chiesa and Nicholas Spooner. And then I guess we'll try to get the video working for the second talk.

Hello. Three minute warning. Okay, thank you. All right, good morning everyone. I'm Megan, and I'm from Boston University, and today I will tell you about SNARKs in relativized worlds. This is joint work with Alessandro Chiesa and Nicholas Spooner.

So, the setting for our project is the following. Suppose we have a streaming computation, and we want to verify its correctness in a streaming fashion. Given a function F, an initial computation state z0, and a final computation state zT, we want to check that zT is the correct output of iteratively applying F, T times, to z0. Since this is an NP computation, the verification statement is: there exist intermediate states zi (the blue ones in the picture) and witnesses wi associated with each step (also in blue), such that at each time step, F applied to the state zi and witness wi outputs the correct new state z(i+1).

So, one way to check the computation is to use a monolithic proof. We run a prover that takes as input the function F, z0, and zT, as well as all the witnesses, all the zi's and wi's, for the entire computation. However, one entity, the prover, has to remember all the intermediate states and also the witnesses. This requires the prover to have memory linear in T. Further, given a proof for a T-step computation, proving T+1 steps requires recomputing the entire proof.

A better method is to verify the computation incrementally, in a streaming fashion. Now we run a prover to prove the computation at every time step. The prover also checks that the proof produced in the previous step is valid. This approach is called incrementally verifiable computation, or IVC, and it was invented by Valiant in 2008.
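To make the statement concrete, here is a minimal sketch of the relation being verified (the step function F and the witnesses are illustrative, not from the paper). Note that this checks the NP relation directly and sees every intermediate state; the point of IVC is to certify the same statement with a single constant-size proof instead.

```python
def streaming_verify(F, z0, zT, steps):
    """Check that zT results from iteratively applying F, using O(1) memory.

    `steps` streams (z_next, w) pairs: the claimed state after each step
    and the witness used for that step.
    """
    z = z0
    for z_next, w in steps:
        if F(z, w) != z_next:  # one local check per step
            return False
        z = z_next
    return z == zT

# Illustrative step function: add the witness modulo 1000.
F = lambda z, w: (z + w) % 1000

# Build a valid 3-step trace starting from z0 = 7.
trace, z = [], 7
for w in [3, 5, 11]:
    z = F(z, w)
    trace.append((z, w))

assert streaming_verify(F, 7, z, trace)
```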
Further, a generalization of IVC is a primitive called proof-carrying data, or PCD. For this, we generalize the path-graph computation to a directed acyclic graph computation. For this talk, we'll focus on the IVC setting, but everything generalizes to PCD. Finally, IVC has many applications. One is proving long, everlasting computations, such as succinct blockchains and verifiable delay functions. Another is when multiple provers work together to create a proof, such as in zero-knowledge cluster computing and verifiable image editing.

So let's define IVC. On the top left, we have an IVC prover who takes a previous state z, a proof pi, and a witness w as inputs. With these inputs, the prover outputs a new state z' and a new proof pi'. Then this new state and proof pair becomes the input to the next prover. This is shown by the green arrows in the picture. Further, this can run many, many times. On the top right, we have the IVC verifier. At any time step, the verifier takes a current state z and a current proof pi, and outputs one if the entire computation so far is correct.

For efficiency, we want the proof size to stay constant. We don't want the proofs to grow with the number of times we run the prover; otherwise, the prover's input size, and hence its runtime, will grow at each time step. IVC satisfies the standard completeness and proof-of-knowledge properties, but they aren't that important for this talk, so I won't define them.

This leads us to the question: how do we instantiate IVC? The main way to instantiate IVC is using SNARKs. To see how this works, we'll zoom into what the IVC prover is doing. The IVC prover runs the SNARK prover for a relation R, which I have as the gray box in the diagram. It's defined as follows: at every time step i, there is a witness wi such that the function F is computed correctly and the SNARK verifier accepts the old state and proof pair (zi, pi_i).
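The interface just described can be sketched as follows. This shows only how the (z, pi) pair flows between steps: the names and the step function are illustrative, and the tuple standing in for the proof only attests to the last step. A real construction replaces it with a SNARK that also attests that the previous proof verified, which is what makes the chain sound end to end.

```python
def F(z, w):
    # Illustrative step function.
    return (z * 2 + w) % 1009

def ivc_verifier(z_prev, w, z_claimed):
    """Accept if the claimed state follows from the previous one."""
    return F(z_prev, w) == z_claimed

def ivc_prover(z, pi, w):
    """One IVC step: check the incoming proof, apply F, emit a new (z', pi')."""
    assert pi is None or ivc_verifier(*pi), "previous proof must verify"
    z_new = F(z, w)
    # Stand-in 'proof'. A real IVC proof is a SNARK attesting both that
    # F was applied correctly and that the previous proof verified.
    pi_new = (z, w, z_new)
    return z_new, pi_new

# Run the prover for three steps, threading (z, pi) through as in the picture.
z, pi = 1, None
for w in [4, 9, 2]:
    z, pi = ivc_prover(z, pi, w)
```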
Notice that the SNARK prover proves that the SNARK verifier accepts. Also, here the IVC proof is the SNARK proof.

So let's go back to our question: how do we instantiate IVC? In particular, SNARKs have their own constructions and impossibility results, and we should take a look at those. There are two main approaches for building SNARKs for NP. The first is using SNARKs in the common reference string, or CRS, model. These require knowledge assumptions, and I won't discuss them further in this talk. The second approach is to use SNARKs in the random oracle model. This means that both the SNARK prover and the verifier have access to a random oracle. In the picture, this is the yellow box with rho in it. Here we write the SNARK security proof in the random oracle model, but when we use the SNARK in the real world, we have to instantiate the random oracle with a hash function that we believe is secure enough.

In this paper, we focus on SNARKs in the random oracle model because we get properties such as a transparent, or universal, setup. We also get efficiency improvements from avoiding expensive algebra. However, when constructing IVC, SNARKs in the random oracle model have the following issue. Both the prover and the verifier access the oracle, but the SNARK is built for verifying the correctness of non-oracle computations. This is a problem because the prover needs to prove that the verifier accepts, but the verifier makes oracle queries, and it's unclear how to do this.

In prior work by Chiesa, Ojha, and Spooner, they get around this issue by heuristically instantiating the random oracle. They use a concrete hash function such as SHA-2; in the picture, I have this as the red boxes. In other words, any time the verifier makes a random oracle call, it actually just runs the SHA-2 circuit. There are theoretical and practical implications of doing this. First, let's talk about the theoretical issues.
We're intentionally breaking the random oracle abstraction by instantiating the oracle. This means you don't actually get end-to-end security in any particular model: the SNARK is proven in the random oracle model, while the incrementally verifiable computation is in the standard model, where the hash function replaces the random oracle. Second, there may be hidden security flaws when we apply the heuristic step. This is true any time we heuristically instantiate the random oracle.

There are practical concerns as well. First, we lose flexibility in the ways that we can instantiate the random oracle. Here, the SHA-2 circuit becomes part of the verifier's code, so we have to use a circuit implementation. This rules out other implementations, such as multi-party computation or a hardware token. Another concern is the efficiency of the SNARK. Specifically, SNARKs proving hash-function circuits are really expensive. Recently, there have been proposals for new hash functions that are more SNARK-friendly, but the community is still doing cryptanalysis on them.

Given these disadvantages of heuristically instantiating the random oracle when constructing IVC, my co-authors and I asked the following question. Is there an oracle model, capital O, such that there are SNARKs in this oracle model, and the SNARK can prove statements about the oracle? In particular, having an oracle model satisfying these two conditions means we can build IVC: we can have a SNARK prover that proves the correctness of its own verifier.

So as a case study, what if the oracle were the random oracle? Do there exist SNARKs that prove statements about random oracle queries? The answer is no. The issue is that every random oracle query needs to be checked individually, so the verifier's runtime could be as long as the computation's runtime, and we won't have efficient verification. What we're looking for is an oracle model in which we can batch many queries to be verified together. So now to our results.
Our paper defines an oracle model called the low-degree random oracle model, or LDRO. I may also call it the low-degree oracle. This oracle model allows us to batch the verification of oracle queries, and it allows us to build SNARKs about oracle computations. So we build a SNARK that proves computations in the low-degree random oracle model, where the SNARK itself also has access to the low-degree oracle. This is exactly what we need for constructing IVC, which has the prover proving statements about its own verifier.

We construct this SNARK using two components. The first component checks the correctness of the non-oracle computation; the second component verifies the oracle queries. For checking the correctness of non-oracle computations, we build a SNARK in the low-degree random oracle model. For verifying oracle queries, we construct a non-interactive query reduction scheme that lets us batch the oracle queries in each step of the computation, and we can delay the verification of all the queries to a later time. This construction builds upon an interactive query reduction technique by Kalai and Raz from 2008.

For the rest of the talk, I'll discuss the following. First, I'll define and discuss the low-degree random oracle. Second, I'll explain the ideas used for building our non-interactive query reduction scheme; as part of this, I'll review how Kalai and Raz's interactive method works and explain how we make it non-interactive.

As a stepping stone to defining the low-degree random oracle, let's start with the random oracle. For this talk, I'll define the random oracle to be a function from bit strings of length m to elements of some finite field F. As a visual example, I drew a 3D Boolean cube representing random oracle queries when m equals three. When you query any point of the random oracle, for example 001, you get a random element y in the field. And now I'll explain the low-degree random oracle.
The low-degree random oracle is a low-degree extension of the random oracle to a finite field F. That is, the LDRO, notated as the rho-hat in blue, is m-variate, has individual degree at most d, and is evaluated over F^m. Further, the low-degree oracle satisfies the following properties. First, it agrees with the random oracle on points in the Boolean hypercube. Second, our oracle is low-degree; for example, the degree d is some constant. Third, algorithms with access to the low-degree oracle can query any point in F^m, not just points in the Boolean hypercube.

So what other properties do we want of the low-degree oracle? As a comparison, the standard random oracle has two nice properties when we do security analyses: it's simulatable, i.e., we can lazily sample the oracle's evaluation table, and it's possible to program it. A natural question is whether our oracle also satisfies these properties, and it satisfies both. For this talk, I'll discuss how simulation works; programmability uses many of the same ideas.

So, there is a perfect, stateful simulation of the low-degree random oracle, and the procedure works as follows. Given a query x, we want to check if y, the output of the oracle at x, is already determined. Why might y be already determined? Well, suppose we lazily sample the low-degree oracle's evaluation table. The evaluation table should also faithfully represent points on a low-degree polynomial. Hence, given a set of query-answer pairs, the value of y might already be determined, just because of the structure of the oracle. Luckily, there's a non-trivial polynomial-time algorithm that checks if y is determined, given previously seen oracle queries. This algorithm is called succinct constraint detection, and it's by Ben-Sasson et al. from 2017. Using this algorithm, we check if y is determined; if so, we use y as the query answer for x.
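To make the extension concrete, here is a minimal sketch over a small prime field (the parameters p = 101 and m = 3 are illustrative). With individual degree d = 1, the low-degree extension is the multilinear extension: it agrees with the sampled table on the Boolean cube {0,1}^m, but can be queried at any point of F^m.

```python
import itertools
import random

p = 101  # small prime field, illustrative only
m = 3    # oracle arity

random.seed(0)
# A lazily sampled 'random oracle': a table of values on the Boolean cube {0,1}^m.
table = {bits: random.randrange(p) for bits in itertools.product([0, 1], repeat=m)}

def rho_hat(point):
    """Multilinear extension of `table`, evaluated at any point in F_p^m."""
    total = 0
    for bits, val in table.items():
        w = 1
        for b, x in zip(bits, point):
            # This factor is 1 exactly when x == b on {0,1}, and it
            # interpolates linearly at points off the cube.
            w = w * ((b * x + (1 - b) * (1 - x)) % p) % p
        total = (total + val * w) % p
    return total

# Agrees with the random oracle on the Boolean cube...
assert all(rho_hat(bits) == table[bits] for bits in table)
# ...and can also be queried off the cube, anywhere in F_p^m.
off_cube_value = rho_hat((5, 17, 42))
```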
If not, then we can uniformly sample the value of y from the field.

So another question is: can we heuristically instantiate this oracle? Since the low-degree oracle has stateful simulation, we can use a trusted party or an MPC protocol to instantiate it. Another idea is the following. Benabbas, Gennaro, and Vahlis defined a pseudorandom function F such that it's possible to efficiently evaluate the polynomial P written on the slide. So to heuristically instantiate the low-degree oracle, we can obfuscate this polynomial P. Note that if F's outputs are indeed pseudorandom, then P will be a pseudorandom polynomial, just because it's defined by its pseudorandom coefficients. Finally, our last idea is to start with an existing strong hash function, such as SHA-2, and then arithmetize it. Note that we can always compute a minimum-degree extension of a hash function, but the resulting polynomial will have really high degree. So we leave the question of how to actually accomplish this instantiation to future work.

Now I will show our non-interactive query reduction protocol for efficiently verifying low-degree random oracle queries. Remember, solving this problem means we can build a SNARK for verifying low-degree random oracle computations. Our first step is to recall Kalai and Raz's interactive query reduction protocol. Step two is to adjust Kalai and Raz's protocol to be SNARK-friendly.

So first, what's query reduction? The goal is to verify n queries to a polynomial. One way to do this is to query the oracle at all n points, but we want to do better and make only a constant number of queries. Kalai and Raz give an interactive proof that checks n points using only one query.

Okay, so let's see how this protocol works. The prover and verifier start by agreeing on some global set of distinct elements, b1 to bn.
These can be chosen in advance and defined as bi = i: so b1 = 1, b2 = 2, and so on up to bn = n. Both the prover and verifier know the bi's, so they can generate a minimum-degree polynomial g such that g(bi) = xi. Here, the xi's are the x's from the queries that we're checking. Next, the prover generates a polynomial f, defined as the low-degree oracle rho-hat composed with the polynomial g, and sends f to the verifier. Note that f is univariate and has degree n (the number of queries we're checking) times m (the arity of the low-degree oracle) times d (the maximum individual degree of each variable of our oracle).

Now the verifier can check all n queries without querying the oracle at all. Instead, it checks that f(bi) = yi. This works because the oracle queries xi are the outputs of the polynomial g, and f is rho-hat composed with g. For soundness, the verifier checks that the prover constructed f correctly, that f really is rho-hat composed with g. To do this, the verifier picks a random challenge point beta and applies the function g to it. Then it queries the oracle rho-hat at g(beta); the expected oracle response is rho-hat(g(beta)). The verifier then checks that this value is the same as the value of f at beta. The soundness error of the scheme is n times m times d over the size of the field, which follows directly from the Schwartz-Zippel lemma. Also, the communication complexity of the scheme is O(nmd), because that's the number of field elements required to define the function f.

Now let's consider how to make the scheme SNARK-friendly. One issue is that the verifier chooses a random beta after seeing the function f. We want this beta to be chosen ahead of time; in other words, we want to de-randomize the verifier and make the protocol non-interactive. To do this, we're going to apply the Fiat-Shamir transform, in which the oracle is the low-degree oracle.
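The interactive protocol just described can be sketched end to end. This is a toy sketch under stated assumptions: a tiny field p = 101 (real soundness needs a large field), a multilinear oracle (d = 1), and n = 3 queries, so deg f <= (n-1)*m. The honest prover sends f as deg(f)+1 evaluations; the verifier checks all n queries against f and makes a single random spot-check against the oracle.

```python
import itertools
import random

p = 101  # small prime field, illustrative only
m = 3    # oracle arity; d = 1 (multilinear), so deg f <= (n-1)*m

random.seed(1)
table = {bits: random.randrange(p) for bits in itertools.product([0, 1], repeat=m)}

def rho_hat(point):
    """Multilinear extension of `table`: the toy low-degree oracle."""
    total = 0
    for bits, val in table.items():
        w = 1
        for b, x in zip(bits, point):
            w = w * ((b * x + (1 - b) * (1 - x)) % p) % p
        total = (total + val * w) % p
    return total

def lagrange_eval(xs, ys, t):
    """Evaluate the unique polynomial through the points (xs[i], ys[i]) at t."""
    total = 0
    for i in range(len(xs)):
        num, den = 1, 1
        for j in range(len(xs)):
            if j != i:
                num = num * (t - xs[j]) % p
                den = den * (xs[i] - xs[j]) % p
        total = (total + ys[i] * num * pow(den, p - 2, p)) % p
    return total

# Queries to check: n points in F_p^m, with the oracle's true answers.
queries = [(3, 7, 1), (10, 2, 8), (5, 5, 5)]
answers = [rho_hat(x) for x in queries]
n = len(queries)
bs = list(range(1, n + 1))  # agreed-upon distinct points b_i = i

def g(t):
    """Coordinate-wise interpolation: g(b_i) = x_i, degree <= n-1 per coordinate."""
    return tuple(lagrange_eval(bs, [x[k] for x in queries], t) for k in range(m))

# Prover: send f = rho_hat composed with g, as deg(f)+1 evaluations.
deg_f = (n - 1) * m
f_xs = list(range(deg_f + 1))
f_ys = [rho_hat(g(t)) for t in f_xs]
f = lambda t: lagrange_eval(f_xs, f_ys, t)

# Verifier: check all n queries without touching the oracle...
assert all(f(b) == y for b, y in zip(bs, answers))
# ...then one random spot-check that f really is rho_hat composed with g.
beta = random.randrange(p)
assert f(beta) == rho_hat(g(beta))
```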
For us, the prover queries the low-degree oracle at the point (g, f) to generate the Fiat-Shamir point. Intuitively, this is fine because we've embedded the random oracle in our low-degree oracle. Then we send this random point beta to the verifier. For soundness, the verifier also checks that beta is generated honestly, so it has to make another oracle query: the query is (g, f), and the oracle should respond with beta. So far, we've maintained the ability to reduce any number of queries to only two queries, but now the scheme is non-interactive and the verifier is de-randomized.

However, there's one other subtle issue. The query that the verifier makes scales with the number of inputs, specifically this (g, f) query. Here, the polynomial g is specified by the queries x1 up to xn, meaning that its description size scales with the number of queries we're checking. This issue also applies to f, because it's just rho-hat composed with g. So how can we fix it? We compress the query (g, f) by hashing it; then beta is rho-hat queried at the hash value. This means the verifier's check also changes: the verifier computes the hash value itself and queries the low-degree oracle at the hash value. The compression property of hash functions ensures that the hash value has a fixed size, and security follows because hash functions are collision resistant. Note that this is the only part of our scheme that relies on a computational assumption, namely having a collision-resistant hash function.

So as a review, we've constructed a non-interactive scheme that batches the verification of low-degree oracle queries. Proving soundness of the scheme ends up being the technical bulk of the paper. Specifically, we have to show that the bad event written in blue on the slide doesn't happen. Proving this requires a new forking lemma for the low-degree random oracle. In our paper, we prove that this forking lemma is as good as the standard forking lemma for random oracles.
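The Fiat-Shamir step with compression can be sketched as follows. This is a simplified stand-in: in the actual scheme, beta is the low-degree oracle's answer at the hash of (g, f), whereas here SHA-256 alone plays that role, and the byte strings standing in for the descriptions of g and f are placeholders. The point it illustrates is that hashing keeps the challenge derivation fixed-size no matter how many queries (g, f) encode.

```python
import hashlib

p = 101  # illustrative field size

def fs_challenge(g_desc: bytes, f_desc: bytes) -> int:
    """Derive the challenge beta non-interactively from a hash of (g, f).

    Both prover and verifier run this on the transcript, so the verifier can
    recompute beta and check it was generated honestly. The hash output has
    fixed size even though g_desc and f_desc grow with the number of queries.
    """
    h = hashlib.sha256(g_desc + b"|" + f_desc).digest()
    return int.from_bytes(h, "big") % p

# Prover and verifier derive the same beta from the same transcript.
beta_prover = fs_challenge(b"coeffs-of-g", b"evals-of-f")
beta_verifier = fs_challenge(b"coeffs-of-g", b"evals-of-f")
assert beta_prover == beta_verifier
```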
Finally, to conclude, I want to quickly overview some other results in our paper. First, we have a new security proof for the Micali SNARK in the low-degree random oracle model. Knowledge soundness is proved using the same forking lemma for the low-degree random oracle that I mentioned before; this is because the original Micali straight-line extraction technique doesn't work in the low-degree random oracle model. Further, we show that our SNARK for oracle computations, as well as our non-interactive query reduction protocol, can be made zero knowledge. And once these two components are zero knowledge, our resulting SNARK in the low-degree random oracle model for oracle computations is zero knowledge. Thanks everyone, and I'm ready to take questions.

Awesome, so maybe we can start setting up the video while we take a question. Could you come up to the microphone for the folks on Zoom?

For your results, the impossibility results as well as the positive results, do you need extraction?

Yes, so SNARKs here have an additional proof-of-knowledge condition on top of being a succinct argument.

Yeah, I mean, for your IVC application and other applications you do need the extraction feature, but do you need it for your constructions and impossibility results as well? Do you need extraction crucially to make the argument go through?

Yes, yeah. Thank you.

Hi, thanks for the talk. I was wondering if you could quickly comment on the relationship of prover efficiency and verifier efficiency to the actual degree parameter. Should we think of the degree parameter as being constant, polynomial? How large should it be?

The degree parameter can be a constant.

Awesome, so thanks Megan for the talk. Thank you. And it looks like we're good on the video, so let's hear from you about non-interactive zero-knowledge proofs with fine-grained security. Hello, everyone, my name is Yuyu Wang. I'm from UESTC. Our title is...