And the next talk is about pseudorandom functions in almost constant depth from low-noise LPN, and Yu Yu is going to give the talk.

Okay. Thank you. So this is joint work with John, and by the way, this is John's Chinese name. Okay. This is the outline of this talk. I will briefly introduce LPN, and then we show how to build a PRG with polynomial stretch in constant depth, followed by the final result, which is a PRF in almost constant depth.

Okay. We all know that LPN is about solving a system of linear equations in the presence of noise. So in this case, every bit is corrupted by independent noise characterized by the Bernoulli distribution with noise rate μ < 1/2. Actually, we have two versions here. The search version says that you are given the matrix A, which is essentially the randomly chosen coefficients, and the vector y, and you must find out x. And the other, decisional version is to distinguish y from uniform randomness given the matrix. And these two problems are proved to be polynomially equivalent. And if you look at this A·x + e: A and x are uniformly chosen, and e is from the Bernoulli noise distribution. In fact, we can draw x from the noise distribution as well, so that means both the secret x and the noise e can come from the Bernoulli distribution, okay? This seems to be well known, but we provide a proof in the appendix, okay?

And this problem, LPN, is believed to be hard. First, it represents the well-known problem of decoding random binary linear codes, which is NP-hard in the worst case. In the average case, for constant noise rate μ < 1/2, the best attack needs about 2^{O(n/log n)} time and samples, which is almost exponential up to a log factor, asymptotically. And we can also play with the trade-off between the running time and the query complexity. But this work is focused on low-noise LPN, where the noise rate is μ = n^{−c} for a constant c. Low-noise LPN is believed to be a stronger assumption than constant-noise LPN, because low noise only makes the cryptanalysis easier. But we don't have a separation; it is just believed to be a stronger assumption. The best attacks need complexity like 2^{O(n^{1−c})}. And quantum computers are not known to have any advantage over classical ones in solving LPN.

So, related work. We start with PKE. In general, we construct PKE from the stronger version, low-noise LPN, but more recently we made some progress on this topic: we constructed the first PKE from constant-noise LPN with sufficient hardness, and it's going to appear in this year's CRYPTO. For symmetric cryptography, we know how to construct PRGs, authentication schemes, and statistically binding string commitment schemes. And in this work, we are going to construct a PRF, I mean a parallelizable PRF, in very small constant depth, from the low-noise version of LPN.

So the main result is that we first show we can do a polynomial-stretch PRG in constant depth. Here AC0(MOD2) refers to the class of polynomial-size constant-depth circuits with unbounded fan-in AND, OR, and XOR gates. And then we present a PRF in almost constant depth, so for any depth that is super-constant, I mean any depth such as log log n or even less. And this complements the negative result that PRFs with more than quasi-polynomial security do not exist in AC0(MOD2). And there are some additional features about our PRGs and PRFs.
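To make the setup concrete, here is a minimal sketch of decision-LPN sample generation, assuming Python with numpy; the parameters n, q, and mu are illustrative, not the talk's.

```python
# Minimal sketch of decision-LPN sample generation, assuming numpy.
# n, q, mu are illustrative parameters, not taken from the talk.
import numpy as np

rng = np.random.default_rng(0)

def lpn_samples(n=128, q=512, mu=0.05):
    """Return (A, y) with y = A.x + e over GF(2), e ~ Bernoulli(mu)^q."""
    A = rng.integers(0, 2, size=(q, n))        # public random coefficient matrix
    x = rng.integers(0, 2, size=n)             # uniform secret
    e = (rng.random(q) < mu).astype(int)       # independent Bernoulli noise
    y = (A @ x + e) % 2
    return A, y

# Decision LPN asks to distinguish (A, y) from (A, uniform).
# For the low-noise regime in the talk, one would set mu = n**(-c)
# for some constant 0 < c < 1, e.g. mu = 128**(-0.5) here.
```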
And they have a secret key, or seed, with only sublinear entropy, but comparable security to the underlying LPN with a linear-size secret. This can be understood by the fact that in low-noise LPN the secret is very sparse, the Hamming weight is very small, so we can use sublinear entropy to sample a secret of linear size. So if we normalize, I mean, represent it as a function of the key size λ, then we get a PRF with security up to 2^{λ/log λ}, which is essentially the same security level as constant-noise LPN. And we have some technical tools, like how to extract Bernoulli noise distributions, and we use sample-preserving reductions.

So let's start with the basic definitions. We are using the randomized versions of PRGs and PRFs. They look almost the same as the deterministic versions, except that the PRG has an additional public coin, and the adversary gets to see this public coin. Okay, so this is the indistinguishability game. And for the PRF, the adversary can have oracle access to the function and sees the public coin.

So quite naturally, we can ask if we can obtain randomized PRGs and weak PRFs from LPN, because LPN looks very close to a PRG or a weak PRF, except for the noise. So quite naturally, maybe we can try to eliminate the noise, like we do in learning with rounding (LWR) from LWE. In this case, maybe we can try to apply some deterministic function to every few samples and then just output the final result, and hope that this could be a PRG or even a weak PRF. Because every single sample is just a single bit, it makes no sense to apply a deterministic function to one bit, so we need to apply it to a few samples. But this method may not work: there are impossibility results saying that such deterministic balanced functions do not exist.

Okay, so our approach is to use some secret source and to convert it, I mean, to use an extractor to convert it into Bernoulli noise. So here's our proposal. We use a secret source, which is actually the seed, and we use an extractor, or a sampler, to convert this source into Bernoulli distributions. So all these x's and e's follow the Bernoulli distribution independently, and we just output A and A·x + e. And this function can be probabilistic because we have the public coin A, so we can use it for free to do the randomization. So that's the construction, and it reduces the job to designing a noise extractor.

Here's the statement of the result. Assuming low-noise LPN, we get a randomized PRG in constant depth, with very tight security, and with a seed of sublinear entropy.

But before I show how to construct the extractor, let's start with a simple question: how to sample a biased bit using uniform randomness, assuming that the noise rate is a negative power of two. In this case, if μ is 2^{−i}, then we can just output the AND of those i bits. Then for a low noise rate, the Shannon entropy is very low; this is the estimate of the Shannon entropy for the Bernoulli distribution. So if we have λ uniform random bits, using this method we can convert them into roughly λ/log n noisy bits.
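As a concrete illustration of the simple biased-bit sampler just described (the AND of i uniform bits equals 1 with probability exactly 2^{-i}), here is a minimal Python sketch; the function names and parameters are illustrative.

```python
# Sketch of the baseline biased-bit sampler: for noise rate mu = 2**(-i),
# the AND of i uniform bits is 1 with probability exactly 2**(-i).
# Names and parameters are illustrative.
import secrets

def biased_bit(i):
    """One Bernoulli(2**-i) bit from i uniform bits."""
    out = 1
    for _ in range(i):
        out &= secrets.randbits(1)
    return out

def biased_bits(lam, i):
    """Spend lam uniform bits to get about lam // i noisy bits."""
    return [biased_bit(i) for _ in range(lam // i)]

# Example: i = 7 gives noise rate 2**-7; 1024 uniform bits yield only
# 146 noisy bits, which is the wasteful baseline the talk improves on.
sample = biased_bits(1024, 7)
```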
But this is far from optimal, because in theory, with λ uniform random bits, we can convert them into roughly λ/H(μ) bits, where H is the binary entropy function. So this method is worse by a polynomial factor. And it was observed that, conditioned on the noise sample, there remains a lot of entropy, so maybe we can use universal hash functions to recycle the entropy from W. But here we use a more parallel approach.

So here's the proposal. We add in a randomized layer: here all these h's are pairwise-independent hash functions, and they are randomized by the public coin. Then all these functions are followed by the AND function. And we show that using this extractor, we can extract almost all the entropy. The extractor can be done in constant depth, because pairwise-independent hash functions exist in constant depth.

Here's the theorem. The noise extracted is very close to the Bernoulli distribution, even given the public coin (the hash descriptions), with an error bounded by these two terms. Concretely, for a low noise rate we can set the parameters so that this entropy of the Bernoulli distribution is the amount of entropy we extract, and this λ is the amount of entropy we invest. So using this method, we can extract a constant fraction of the entropy, and we get a PRG of polynomial stretch, with sufficiently small errors, much less than the actual security of the LPN. And the proof uses tools like Cauchy-Schwarz and pairwise independence, and looks very close to the crooked leftover hash lemma. We also use a lemma about flattening Shannon entropy: if we take many independent samples of the same distribution, it yields a new distribution which is very close to flat. So the first error term comes from this part, and the second term comes from this part, but I will omit the details.

And actually, we can also use an alternative approach: a sampler. This time the input is better randomness; we use perfect randomness. In this way we can do it deterministically and more efficiently, still in constant depth, and in this case we can save the XOR gates. The idea is we take many copies of random strings, each random string having Hamming weight exactly one, and combine this many copies to obtain a random string of roughly the same Hamming weight as the Bernoulli distribution. Let's call this distribution... I don't know how to pronounce this symbol. The amount of randomness needed: 2μq is the number of copies, and each copy needs about log q random bits, so this gives the total number of random bits needed to sample the distribution. And we show that this is actually asymptotically optimal, because this amount of randomness is roughly the same as the Shannon entropy of the Bernoulli distribution of the same length at the same noise rate. So this is randomness-efficient.

And we show that if we sample x and e from this distribution, we will get the same end result. So we get a randomized PRG of sublinear seed length with security comparable to the underlying LPN with a linear-size secret. Here's the proof. We prove that standard computational LPN implies the computational version of this LPN with this noise distribution.
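As one concrete instantiation of the randomized layer, here is a sketch of a textbook pairwise-independent hash family over GF(2), h_{A,b}(x) = A·x ⊕ b; this is a standard family of the kind the talk appeals to, not necessarily the paper's exact choice, and the dimensions are illustrative.

```python
# Textbook pairwise-independent hash family over GF(2):
# h_{A,b}(x) = A.x + b, with (A, b) supplied by the public coin.
# Each output bit is one unbounded fan-in XOR, so the family fits in
# constant depth, as the talk notes. Dimensions are illustrative.
import numpy as np

rng = np.random.default_rng(1)

def sample_hash(n_in, n_out):
    """Sample the description (A, b) of one hash from the family."""
    A = rng.integers(0, 2, size=(n_out, n_in))
    b = rng.integers(0, 2, size=n_out)
    return A, b

def apply_hash(A, b, x):
    """Evaluate h_{A,b}(x) = A.x + b over GF(2)."""
    return (A @ x + b) % 2

# In the extractor, each hashed block would then feed the AND layer
# that produces the Bernoulli-like noise bits.
```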
And then we use a sample-preserving reduction to show that the computational version of this LPN implies its decisional version. The first step is a bit complicated, so I will refer to the paper.

So once we have a randomized PRG in constant depth with polynomial stretch, we still need to construct a PRF in almost constant depth. It consists of two steps. First, we construct a PRF of small, like super-logarithmic, input size. This can be done by using an n-ary GGM tree of some constant depth: because in this case we have polynomial stretch, we can do an n-ary GGM instead of a binary GGM. Since we have n² bits of output, we just parse them as n blocks, each block consisting of n bits. Then we do this to a super-constant depth, so we get a small-domain PRF. Then we just extend this small-domain PRF to an arbitrarily large domain PRF.

In the second step, we use Levin's trick, the generalized Levin's trick. The idea is: if you have a long input, you just hash the long input into super-log-n bits to fit into this small-domain PRF. Then we do this independently for sufficiently many copies, and then XOR their outputs. Here all these h's are universal hash functions. Actually the original Levin's trick uses only one single copy, and it already gets a large-domain PRF, but it's not security preserving. So in the generalized trick we use many copies. And here we use a long key, but this key can be expanded using our polynomial-stretch PRG: you can use a short key and expand it into a long key.

So here is the generalized Levin's trick. If we have small-domain random functions (for simplicity, I assume here all these functions are random functions instead of pseudorandom functions), and these are the hash functions, then the resulting function looks very close to a random function of the same input and output size for any adversary bounded by q queries. So here the statement is information-theoretic: we consider any adversary, not just computationally bounded ones. And in a previous version, we were selling this theorem as well, but we were told that the essential idea actually appeared almost 20 years ago, so in this case, we only provide a new proof in the appendix. You can show that, using this method, the security is preserved for q up to n raised to a super-constant, because you have to make this term small, and make this ℓ sublinear, to make the whole term sufficiently small. So in the end, we get a PRF in super-constant depth, and the (t, ε) security is preserved. And this q is super-polynomial, but it has to be bounded by n to the power of some super-constant, and this super-constant is essentially the depth of the circuit that implements the PRF. So this is the dependency.

So to conclude, we get a polynomial-stretch PRG in constant depth, and PRFs in almost constant depth. And the (t, ε) security is actually better than the underlying LPN. So if we express it as a function of the key size, we get security up to 2^{λ/log λ} against the best known attacks. And the annoying part is the dependency: this q has to be no greater than n to the super-constant, which is the depth. And open problems: it remains unknown if we can construct PRFs in constant depth.
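Here is a hedged sketch of the generalized Levin trick just described, F'(x) = XOR_i F_i(h_i(x)), with HMAC-SHA256 as a purely illustrative stand-in for both the universal hashes h_i and the small-domain PRFs F_i (the paper's components are LPN-based, not HMAC).

```python
# Sketch of the generalized Levin trick: F'(x) = XOR_i F_i(h_i(x)).
# HMAC-SHA256 is a stand-in for both the universal hashes h_i and the
# small-domain PRFs F_i; all names and sizes are illustrative.
import hmac, hashlib, secrets

ELL = 4  # number of independent copies (illustrative)

hash_keys = [secrets.token_bytes(16) for _ in range(ELL)]
prf_keys = [secrets.token_bytes(16) for _ in range(ELL)]

def short_hash(key, msg):
    """Stand-in h_i: compress a long input to a 16-bit short input."""
    return hmac.new(key, msg, hashlib.sha256).digest()[:2]

def small_prf(key, short_input):
    """Stand-in F_i: a PRF on the small (16-bit) domain."""
    return hmac.new(key, short_input, hashlib.sha256).digest()

def big_prf(msg):
    """Large-domain PRF as the XOR of ELL independent small-domain calls."""
    out = bytes(32)
    for hk, pk in zip(hash_keys, prf_keys):
        block = small_prf(pk, short_hash(hk, msg))
        out = bytes(a ^ b for a, b in zip(out, block))
    return out

tag = big_prf(b"an arbitrarily long input string")
```

Using several copies and XORing, rather than the single copy of the original Levin trick, is what makes the domain extension security preserving, at the cost of the longer key that the PRG then supplies.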
We could weaken the goal, say to a weak PRF in AC0(MOD2), or maybe use more powerful components, such as threshold gates, to obtain PRFs in TC0. These remain open problems. And there are also other open problems, such as constructing more ambitious objects from LPN, such as CRHFs and FHE. Okay. That's my talk. Thank you.

Questions?

What is the relation between TC0 and this AC0(MOD2) with super-constant depth?

TC0 can additionally use threshold gates.

No, but I mean, is one of them a subset of the other or not? Because it's TC0 with actual constant depth versus AC0(MOD2) with super-constant depth.

I don't know of any connections.

Because I think Naor-Reingold achieves TC0, I believe, no? Doesn't Naor-Reingold achieve TC0 PRFs?

Which one? Naor-Reingold? From LPN, we don't know.

No, not from LPN. Naor-Reingold, yes. TC0, yes. So for the complexity classes, you don't know whether one is better than the other, right?

I don't know. More questions?

How does it compare to what we know how to do from LWE instead of LPN?

Sorry?

How does it compare to what we know from LWE, if you were allowed to say?

From LWE, they have a construction for a certain range of parameters: we can get a PRF even in TC0 from LWE. But you know, LWE uses a very large modulus, so we cannot apply the same technique to LPN.

More questions? I have a question, okay. I want to ask you for an opinion. Do you think that weak PRFs in AC0(MOD2) exist?

I don't know.

What's the tendency, like, 60% yes, 40% no?

I don't know, but there is this ITCS paper which constructs a weak PRF in AC0(MOD2) from an assumption which looks very close to LPN, but it's not standard LPN.

Okay. Thank you. There's a five-minute break to switch rooms.