Hello, I'm Peter Dixon from Iowa State University, and I'm going to be explaining our results on perfect zero knowledge.

Perfect zero knowledge is an interactive proof system: there's a prover and a probabilistic polynomial-time verifier, and the prover is trying to get the verifier to accept the input. It has the standard interactive proof conditions: the prover can't trick the verifier into accepting, at least not very often, and the prover can usually convince the verifier to accept when it's supposed to accept. And then it also has the zero knowledge condition: there's a simulator that can mimic the interaction between the prover and the verifier on yes instances. The idea is that the prover can't tell the verifier any new information, in a sense. It's perfect zero knowledge because we require the simulator's output to exactly match the distribution of the real interaction, as opposed to SZK, statistical zero knowledge, where the simulator has to be really, really close but doesn't have to be exactly the same. We're also going to talk about the non-interactive version of perfect zero knowledge, NIPZK, where the prover gets to send a single message to the verifier, and that's it.

So how do perfect zero knowledge and non-interactive perfect zero knowledge fit in with other complexity classes? Obviously NIPZK is in PZK, which is in SZK, because you just relax the simulation condition. SZK is known to be in AM and also coAM, because it's closed under complement. PZK was recently shown to be in PP, but there's an oracle separation between SZK and PP. So our question is: can we get a tighter bound? Can we put NIPZK, or even PZK, into something below these two?

First we need some class below these two, so we're going to look at SBP, Small Bounded-error Probability. This sits roughly halfway between BPP and PP. BPP requires at least a two-thirds chance of accepting yes instances and at most a one-third chance of accepting no instances. In SBP we relax those thresholds to exponentially small ones — accept yes instances with probability at least 2/2^{p(n)} and no instances with probability at most 1/2^{p(n)}, for some polynomial p — but it's different from PP because we need a constant ratio between the two acceptance probabilities, here two-to-one, whereas in PP you could have one-half plus epsilon versus one-half minus epsilon. SBP contains MA: you can just guess the witness and then check it, because SBP is a randomized class. And it's contained in AM using the Goldwasser–Sipser set lower bound protocol: you ask, is the set of accepting random strings at least this big, or at most this big? And obviously it's contained in PP, because you just relax the two-to-one ratio requirement.

What we managed to prove is that NIPZK is contained in co-SBP, improving the upper bound on NIPZK to AM ∩ co-SBP. We proved this by looking at the complete problem for NIPZK given in Lior Malka's 2008 paper, where he defined a problem called Uniform. A quick aside: when I say "distribution," I mean a circuit whose output, on a uniformly random input, matches that distribution, so I'm going to use the distribution and the circuit pretty much interchangeably. In Uniform, you're given a distribution on (n+1)-bit strings, and you have to accept if the first n bits are uniform — if you cover up the last bit, you get the uniform distribution — and the last bit is 1 at least two-thirds of the time. If both conditions hold, you have to accept. If the support of the distribution contains at most 2^n/3 strings ending in 1 — so it's small — you have to reject.
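To keep the promise conditions straight, here's the Uniform problem written out symbolically, in my own notation (D_{1..n} is the first n output bits and D_{n+1} is the last bit; this is a restatement of the talk's description, not a formula from the paper):

```latex
% Yes instances: first n bits exactly uniform, last bit usually 1.
D_{1..n} = U_n
  \qquad\text{and}\qquad
\Pr[\,D_{n+1} = 1\,] \;\ge\; \tfrac{2}{3}

% No instances: the support contains few strings ending in 1.
\bigl|\{\, s \in \operatorname{supp}(D) \;:\; s_{n+1} = 1 \,\}\bigr| \;\le\; \tfrac{2^n}{3}
```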
And it's important to keep in mind that the no condition doesn't say anything about the probability of getting a string ending in 1. You could get a string that ends in 1 every single time, as long as there aren't that many different such strings.

We showed that you can solve this problem in co-SBP. So we need to accept with some tiny probability if this set is small, and accept with a really tiny probability if the first n bits are uniform and the last bit is usually 1 — yes and no are flipped here, because we're giving an SBP algorithm for the complement. Our algorithm (sketched in code below): first we take two samples and compare their first n bits. If they match exactly, the distribution is probably not uniform, because the uniform distribution has the lowest collision chance, so we accept. Otherwise we take k more samples, where k is chosen to make the constants below work out. If every single one of them ends in a 0, the last bit is probably not usually 1, so we accept. If both tests fail, we reject.

Here's the technical result that shows why this works. If there are very few strings ending in 1, we get a trade-off between the probability of getting a string ending in 1 and the distance of the first n bits from uniform: the more likely you are to get a string that ends in 1, the further you are from uniform. So one of the two tests is likely to succeed, and the proof is pretty straightforward. Let T be the set of n-bit prefixes of support strings ending in 1, and suppose the last bit is 1 with probability 1/3 + x. The statistical distance from uniform is the largest gap over any set, so it's at least the gap on T — ignoring strings that don't end in 1, the distribution lands in T with probability at least 1/3 + x, while the uniform distribution lands in T with probability |T|/2^n, which is at most 1/3. So the distance is at least (1/3 + x) − 1/3 = x.

All right, so if the support has at most 2^n/3 strings ending in 1, then either the first n bits are at least 1/6 from uniform, or the chance that the last bit is 1 is at most one-half — just put x = 1/6 in here. If the distribution is at least 1/6 from uniform, a standard collision-probability result says the chance the two samples collide on their first n bits is at least 6/(5·2^n). On the other hand, if the chance that the last bit is 1 is at most one-half, the chance of a 0 is at least one-half, so the chance that all k extra samples end in 0 is at least (1/2)^k, which is at least 6/(5·2^n) — that's exactly why k was picked the way it was. So on a yes instance (small support), our accept probability is at least 6/(5·2^n).

Now we look at no instances, where the first n bits are uniform and the last bit is 1 pretty often. The chance of a collision on the first n bits is 1/2^n, because they're uniform. The chance that all k extra samples end in 0 is at most (1/3)^k, which for big enough n is bounded by 1/(10·2^n). So our chance of accepting a no instance is at most 1/2^n + 1/(10·2^n) = 11/(10·2^n). The ratio between the two is 12/11, which is a constant, so we have an SBP algorithm for the complement of Uniform, and NIPZK is contained in co-SBP.
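Here's a minimal sketch of that tester in Python. The black-box sampler interface is mine, and the choice k = n − 1 is just one value satisfying (1/2)^k ≥ 6/(5·2^n) (for large enough n) — the talk only says k is picked to make that bound hold, so treat the constant as illustrative, not as the paper's.

```python
def sbp_test(sampler, n):
    """One run of the SBP-style tester for the complement of Uniform.

    `sampler()` returns an (n+1)-bit string drawn from the distribution.
    Per the talk's analysis, this accepts small-support instances with
    probability >= 6/(5*2**n), and instances whose first n bits are
    uniform (with last bit usually 1) with probability <= 11/(10*2**n).
    """
    # Test 1: take two samples; a collision on the first n bits is
    # evidence against uniformity, so accept.
    a, b = sampler(), sampler()
    if a[:n] == b[:n]:
        return True

    # Test 2: take k more samples; if every last bit is 0, the last bit
    # is probably not usually 1, so accept.  k = n - 1 makes
    # (1/2)**k >= 6/(5*2**n) hold (my choice of constant).
    k = n - 1
    if all(sampler()[n] == '0' for _ in range(k)):
        return True

    return False  # both tests failed: looks like a yes instance of Uniform

# Toy usage: a tiny-support distribution that always outputs 0001.
print(sbp_test(lambda: '0001', 3))  # accepts via the collision test
```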
So can we push this any further? Can we get PZK into co-SBP, or NIPZK into SBP? We give oracle results showing that both of these fail relative to an oracle. Though one of the reviewers pointed out that the first of these can actually be derived from a paper of Scott Aaronson's, based on the permutation testing problem not being in SBQP, the quantum version of SBP — so that's actually even stronger than what we showed here.

Here's a picture that summarizes our results. Solid lines are containments, dashed lines are relativized separations, and the red parts are consequences of our results: here's our containment result, here's our separation result, and then all of the other separations fall out.

Our first oracle separation — this is the one that was made obsolete by permutation testing, preemptively obsolete I should say — shows that the problem Uniform-or-Small is not in SBP. The problem is: we've got a distribution, and we need to accept if it's the uniform distribution and reject if it has small support. It's a simplified version of the complete problem for NIPZK. It's still in NIPZK, but it's not in SBP relative to an oracle, so we get the oracle separation between NIPZK and SBP.

For the other one, we look at the problem Disjoint-or-Identical. We have two distributions: if they have completely disjoint supports, we need to accept, and if they have completely identical supports, we need to reject. You can think of this as a version of graph non-isomorphism. If you take two graphs and randomly permute them, you get two distributions on graphs. If the graphs are non-isomorphic, no permutation of one matches a permutation of the other, so you get disjoint distributions; if the graphs are isomorphic, the distributions are exactly the same. This problem is in co-PZK for the same reason graph non-isomorphism is — graph isomorphism has a PZK protocol — but it's not in SBP relative to an oracle. So co-PZK is not in SBP relative to an oracle, which shows PZK is not in co-SBP relative to that oracle.

At a high level, what we do is show that you can't solve these problems in SBP with purely sampling-based techniques: with a polynomial number of samples, you can't get that two-to-one probability ratio. Then we use the oracle to hide everything an algorithm could exploit that isn't sampling. We use the same circuit for every instance of the problem, so the algorithm can't learn anything from the circuit; we randomize which instances we use, so the algorithm can't cheat; and then we do some diagonalization to get one oracle that beats every algorithm instead of one oracle that beats one algorithm. I'm going to talk about how we hide all the information, and then I'm going to talk about why the sampling doesn't work.

To hide the information, we take an ordered multiset S of n-bit strings, and we make our oracle answer queries about it: it returns 1 if the i-th bit of the j-th string in S is 1, and it returns 0 if that bit is 0. Then we build a circuit that, given x, outputs the x-th string in S by just making n queries to the oracle and outputting those bits. Using this, we can make one circuit that we plug different oracles into to get completely different problem instances, so the algorithm can't rely on looking at the circuit for any information.
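Here's a toy version of that hiding construction in Python — the query convention and the names are mine, just to illustrate the idea:

```python
def make_oracle(strings):
    """Oracle for an ordered multiset of n-bit strings: on query (j, i),
    answer 1 iff the i-th bit of the j-th string is 1."""
    def oracle(j, i):
        return 1 if strings[j][i] == '1' else 0
    return oracle

def circuit(oracle, x, n):
    """The single fixed 'circuit': on input x, reconstruct the x-th string
    of the hidden multiset with n oracle queries, one per bit."""
    return ''.join(str(oracle(x, i)) for i in range(n))

# The same circuit, plugged into two different oracles, samples from two
# completely different distributions -- so the circuit's description
# carries no information about the instance.
oracle_a = make_oracle(['0000', '1111'])
oracle_b = make_oracle(['1010', '0101'])
print(circuit(oracle_a, 1, 4))  # '1111'
print(circuit(oracle_b, 1, 4))  # '0101'
```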
We also have to deal with the algorithm cheating against a fixed oracle. Say the oracle has the first string being all ones, and say we're doing Disjoint-or-Identical: if the algorithm somehow knows which locations to query beforehand, it can just query those and beat that oracle. It would be a terrible algorithm for general purposes, because on most instances this doesn't work, but it beats this one instance. To make the proof go through, we randomize which instance we pick and show that, over a randomly chosen instance, the algorithm has a bad accept-to-reject ratio on average — so there has to be at least one instance where the algorithm has a bad accept-to-reject ratio. At that point the algorithm is basically stuck with random sampling.

So, why random sampling doesn't work. We'll be talking about Disjoint-or-Identical here, but it's a similar proof for Uniform-or-Small. We've got a set of samples, and we have to use the samples as the basis for our decision to accept or reject. Of course, if we have a collision — any overlap between the two sample sets — the supports can't be disjoint, so under the promise the distributions are identical and we can just reject. But if there's no overlap, the samples are only slightly more likely to have come from disjoint distributions. A little more precisely, we set it up so that there's a yes instance and a no instance where, conditioned on the no-instance samples being collision-free, any particular sample set is exactly as likely to come from the no instance as from the yes instance.

So let's pretend for now that the chance of getting a collision is 1/2^n, and that the algorithm's behavior on any collision-free sample set is just to accept half the time. We'll see once we get to the end that it doesn't matter what the algorithm does, even if it examines the sample sets; all that matters is the collision chance, and that chance would have to be a constant to get SBP behavior.

All right, so the chance of accepting the no instance is a sum over all sample sets: the chance of getting that sample set times the chance of accepting it. We break it into the collision-free and collision cases. In the collision-free case the accept probability is one-half; in the collision case we always reject, so that term disappears. Now, the chance of getting a collision-free pair (S1, S2) from the no instance is — by the conditional setup above — the chance of no collision times the chance of getting it from the yes instance. So we can replace "get (S1, S2) from the no instance" with "get (S1, S2) from the yes instance" times the chance of no collision. We pull that factor out of the sum, and what's left — the probability of getting (S1, S2) from the yes instance times one-half, summed over all pairs — is exactly the probability of accepting the yes instance. That's how the one-half disappears; the identity is written out below.

So the chance of accepting the no instance is the chance of accepting the yes instance times the chance of not getting a collision. To get SBP, we need a constant ratio between these two, so the no-collision probability has to be 1 minus a constant — and it's 1 − 1/2^n, so it isn't. There's no sampling-based SBP algorithm for this problem. As I already kind of mentioned, it's a pretty similar argument to show that Uniform-or-Small is not in SBP: again, a sampling-based algorithm can't distinguish whether the support size is small.
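Written out, the calculation looks like this (my notation: A is the event that the algorithm accepts, and "no coll" is the event that the two sample sets don't intersect):

```latex
\begin{align*}
\Pr[A \mid \text{NO}]
  &= \sum_{\substack{(S_1,S_2)\\ \text{collision-free}}}
       \Pr[\text{get } (S_1,S_2) \mid \text{NO}] \cdot \Pr[A \mid (S_1,S_2)] \\
  &= \Pr[\text{no coll} \mid \text{NO}]
     \sum_{\substack{(S_1,S_2)\\ \text{collision-free}}}
       \Pr[\text{get } (S_1,S_2) \mid \text{YES}] \cdot \Pr[A \mid (S_1,S_2)] \\
  &= \Pr[\text{no coll} \mid \text{NO}] \cdot \Pr[A \mid \text{YES}].
\end{align*}
```

With a collision chance of 1/2^n, the accept-to-reject ratio Pr[A | YES] / Pr[A | NO] is 1/(1 − 2^{-n}), which tends to 1 — nowhere near the constant ratio SBP needs.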
And then we also get the following corollaries. PZK is not equal to NIPZK relative to some oracle — that's because NIPZK is in co-SBP and PZK is not, relative to an oracle. And you get that NIPZK is not in co-PZK relative to an oracle. You can also use these results to give an alternate proof that NIPZK and PZK aren't closed under complement, which was shown in this paper, and an alternate proof that SBP is not closed under complement, which was shown here. And that's all I've got. Thank you.