Hello everyone, I'm Lisa, a researcher in the CWI Cryptology Group, and I'm very happy to present our work, "Low-Complexity Weak Pseudorandom Functions in AC0[MOD 2]". This is joint work with my wonderful co-authors Elette Boyle, Geoffroy Couteau, Niv Gilboa, Yuval Ishai, and Peter Scholl.

So let's start at the beginning. Recall that a pseudorandom function is a keyed function that looks like a truly random function. The difference for a weak pseudorandom function is that we require security to hold only for random inputs instead of chosen inputs. The security notion we consider here is sub-exponential security, so security against circuits of sub-exponential size that have one-over-sub-exponential advantage. And why is that the security notion we aim for? Well, for quasi-polynomial security you get a completely different landscape, as you will also see on a later slide, and exponential security is often just too hard. Further, sub-exponential security is also roughly what you get from standard assumptions such as discrete log, factoring, and learning with errors.

Weak pseudorandom functions have many applications, for example towards secure communication, where the parties, after a one-time setup, can use the weak pseudorandom function evaluated on random inputs as a one-time pad. Weak pseudorandom functions can also be used for identification, where a party can show knowledge of the shared key by replying to random challenges.

The question we reconsider in this work is: what is the lowest complexity class we can hope to construct weak pseudorandom functions in? This question is at the intersection of many interesting areas. First, if you have a weak PRF in a low complexity class, it typically gives you efficient symmetric-key primitives, for example highly parallelizable stream ciphers and simple message authentication codes. Second, it has implications for learning theory, which asks the question of which functions can be learned efficiently through black-box access.
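In symbols, the sub-exponential weak-PRF security requirement just described can be sketched as follows (notation mine, not from the slides): for some constant $\varepsilon > 0$ and every circuit $\mathcal{A}$ of size at most $2^{n^{\varepsilon}}$,

```latex
\Big|\ \Pr_{k,\,x_1,\dots,x_Q}\!\Big[\mathcal{A}\big((x_i, F_k(x_i))_{i \le Q}\big)=1\Big]
\;-\; \Pr_{R,\,x_1,\dots,x_Q}\!\Big[\mathcal{A}\big((x_i, R(x_i))_{i \le Q}\big)=1\Big]\ \Big|
\;\le\; 2^{-n^{\varepsilon}},
```

where the inputs $x_i$ are uniformly random (this is what makes the PRF "weak") and $R$ is a truly random function.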
And if a complexity class contains a weak PRF, then this class cannot be learned under the uniform distribution, so it gives limitations in learning theory. Third, the existence of low-complexity symmetric objects has been related to the existence of high-end cryptography. For example, constant-locality PRGs give constant-overhead secure computation.

The low complexity classes we focus on in this work are variants of AC0. So AC0 is the complexity class of circuits with AND and OR gates of polynomial fan-in and constant depth, as you can see here on the slide. We will also consider AC0 on top of parities, where in addition to the AND and OR gates, a layer of XOR gates is allowed at the bottom. More generally, we will consider AC0 mod 2, where the XOR gates are allowed at arbitrary layers. And finally, though it's not the focus of our work, it will also come up, so I'll mention it here: ACC0, where arbitrary mod gates, so mod 2, mod 3, and so on, are allowed at arbitrary layers.

Here you can see an overview of previous work. If you want to look at it more thoroughly, you can pause or go to our paper. But what I want to stress is the following. First, there are basically two approaches to constructing weak PRFs. One is to build on standard assumptions like factoring, decisional Diffie-Hellman, or learning with errors. The other is to put forward new assumptions for which known attacks can be ruled out or plausibly do not apply. So "heuristic" here basically means not used in previous work, but might become standard in the future. In this work, we follow the heuristic approach. The second thing I want to mention is, if you look at the orange parts here: we know that in AC0 there cannot exist weak PRFs with better than quasi-polynomial security, by the famous result of Linial, Mansour and Nisan. On the other hand, we know that above AC0 mod 2 there exist even strong PRFs, so we do have constructions of strong PRFs.
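As a concrete toy picture of "AC0 on top of parities" (a minimal sketch with hypothetical encodings, not from the paper): the bottom layer computes parities of the input bits, and a constant-depth AND/OR circuit, here simply a DNF, sits on top.

```python
def eval_ac0_on_parities(M, dnf, x):
    # Bottom XOR layer: each wire y_j is the parity of a subset of input
    # bits, encoded as a list of indices (a row of the F2 matrix M).
    y = [sum(x[i] for i in row) % 2 for row in M]
    # Toy AC0 part on top: a DNF, i.e. an OR of ANDs over the wires y,
    # each term a list of (wire index, required value) literals.
    return int(any(all(y[j] == v for j, v in term) for term in dnf))
```

For instance, with `M = [[0], [0, 1]]` the circuit first computes `y = (x0, x0 XOR x1)` and then evaluates the DNF on those two parity wires.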
So in this work, we focus on the area in between, where so far there is only one candidate weak PRF with more than quasi-polynomial security, as you can see here. Let's take a closer look at the area in between, starting with AC0 mod 2. As you already saw on the last slide, if you were fast enough, in a recent work we put forward a candidate weak PRF computed by an XNF formula, which is basically a DNF where instead of disjunctions you have XORs. And what we show in this work is that you can go even lower, to sparse F2 polynomials, which are similar to XNF formulas but without negations of the inputs.

Going to AC0 on top of parities, there was a candidate weak PRF put forward by Akavia et al. in 2014. But unfortunately, it was shown later that their candidate can be broken in quasi-polynomial time by a so-called algebraic or rational degree attack, which I will explain in a bit more detail later. In this work, we show how to fix this candidate and put forward a new candidate weak PRF in AC0 on top of parities, the only one currently known that plausibly has sub-exponential security.

So let's start with the candidate weak PRF computed by sparse F2 polynomials. The starting point is the previous candidate in AC0 mod 2. As you can see here, the candidate is an XOR of ANDs, where the AND terms are increasingly biased towards zero. The intuition behind this construction, very roughly, is that the more samples one sees, the more of these terms will kick in: given very few samples, the outputs are indistinguishable from random because of the low-order AND terms, and the more samples are given out, the more noise is added by the higher-degree terms. This candidate, if you write it in a different way, can also be viewed as learning parity with variable-density noise, where the higher-degree terms correspond to the sparse noise. And the outer XOR is important to ensure that linear attacks fail.
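As a rough illustration of this XOR-of-ANDs shape, here is a toy sketch (parameter names `D`, `w`, `n` are mine, and this is a simplified stand-in, not the exact candidate from the paper). The key stores, for each AND position, a variable index and a negation bit, so the function is an XNF formula; block `i` holds width-`i` terms, which are increasingly biased towards zero on random inputs.

```python
import secrets

def keygen(D, w, n):
    # Toy key: for each block i = 1..D, w terms of width i; each position
    # holds (variable index, key bit), where the key bit optionally
    # negates the input variable (the "XNF" negation part).
    return [[[(secrets.randbelow(n), secrets.randbelow(2)) for _ in range(i)]
             for _ in range(w)] for i in range(1, D + 1)]

def eval_wprf(key, x):
    # XOR of ANDs: a width-i AND of roughly random bits is 1 with
    # probability about 2^-i, so higher blocks behave like increasingly
    # sparse noise on top of the low-order terms.
    out = 0
    for block in key:
        for term in block:
            out ^= all((x[idx] ^ kbit) for idx, kbit in term)
    return out & 1
```

For example, the hand-built key `[[[(0, 0)]], [[(0, 1), (1, 0)]]]` computes the polynomial x0 XOR ((1 XOR x0) AND x1).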
So linear attacks are an attack framework that captures large classes of attacks that apply to learning-parity-with-noise-like assumptions, such as Gaussian elimination, statistical decoding, information set decoding, and BKW. As already mentioned, an XNF formula is basically a sparse multivariate F2 polynomial in the inputs and their negations. So what we wanted to do in this work is get rid of the negation part.

How can we do this? The idea is quite simple: instead of letting the key decide which variable to negate or not negate, we let the key decide which variable to choose from a set of possible variables. And of course, the simplest attempt you would try is to take just two variables: add another copy of the variables to x, and then for each term let the key decide between x_{i,j,k} and x'_{i,j,k}. Unfortunately, this does not work. There is a simple attack, because the problem is that both x_{i,j,k} and x'_{i,j,k} are zero with too large a probability, namely one fourth. And if that happens, the whole AND term is cancelled out. This is also public when you know the input, and this is exactly what happens for the random inputs one gets to see for weak PRFs.

Now the solution is also still quite straightforward. You can circumvent this by choosing not between two but between sufficiently many variables; then the event that all of them are zero happens only with very, very small probability, and this attack no longer applies. So instead of XORing x_{i,j,k} with the key, the key picks which of the variables x_{i,j,k} to include. And we can indeed show that this candidate, like the candidate before, provably resists linear attacks up to 2^B samples, if w and B are now chosen large enough. The analysis for this is similar to the previous analysis, but more involved due to the structure — what you can see here are the extra layers at the top.
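Continuing the toy sketch from before, the negation-free variant might look like this (again with hypothetical parameter names, and simplified: here the key picks each AND position's variable from all of x rather than from a structured bucket). Since there are no negations, each term is a plain monomial and the whole function is a sparse F2 polynomial.

```python
import secrets

def keygen_select(D, w, n):
    # Toy key: block i has w AND terms of width i; for each position the
    # key now stores the index of the chosen input variable instead of a
    # negation bit.
    return [[[secrets.randbelow(n) for _ in range(i)]
             for _ in range(w)] for i in range(1, D + 1)]

def eval_select(key, x):
    # XOR of key-chosen monomials: a sparse F2 polynomial in x, with no
    # negated inputs anywhere.
    out = 0
    for block in key:
        for term in block:
            out ^= all(x[idx] for idx in term)
    return out & 1
```

For example, the hand-built key `[[[0]], [[1, 2]]]` computes the sparse polynomial x0 XOR (x1 * x2).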
So the next question we ask in this work is: what if you allow the XOR layer only at the bottom? So what if you go to AC0 on top of parities? As I mentioned before, we're not the first to consider this question, so let's take a look at a candidate construction from 2014. On the slide, you see the weak PRF with keys s and k. The design paradigm is as follows. G is chosen such that it is a function in AC0 that is not too biased, so it has constant bias; here it is the so-called TRIBES function. Then k is used to hide the heavy Fourier coefficients that we know G has, because of the result of Linial, Mansour and Nisan. And finally, the bias is removed, to get from a constant-bias weak PRF to a truly random-looking weak PRF, by adding a parity of x with a fresh part of the secret key s. This way of achieving a weak PRF can also be viewed as learning parity with simple deterministic noise, where the noise function is determined by the public function G and the secret key k.

This candidate can be shown to resist linear attacks based on a simple combinatorial conjecture. The problem is that it can be broken in quasi-polynomial time by a so-called algebraic or rational degree attack, as was shown by Bogdanov and Rosen in 2017. In order to see how we overcome this issue, let's take a look at these algebraic or rational degree attacks. The idea behind the attack is that if one can find a low-degree polynomial H such that G·H = 0 or (G+1)·H = 0, then given input-output pairs, one can solve for H using about n^{deg(H)} samples. So for H of logarithmic degree, this gives an attack in quasi-polynomial time. Recall that the AC0 circuit in the construction of Akavia et al. is of the following form. Now, Bogdanov and Rosen observed that (G+1)·G_i = 0 always holds, for any of the inner AND terms G_i.
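The core linear-algebra step behind rational degree can be made concrete. The sketch below is my own toy implementation (brute force over all inputs, so only for small n): it checks whether a nonzero polynomial H of degree at most d with f·H = 0 exists, by collecting one F2 linear constraint on H's monomial coefficients for every input where f is 1. Running it on f and on f+1 gives the primal and dual rational degrees discussed next.

```python
from itertools import combinations, product

def annihilator_exists(f, n, d):
    # Is there a nonzero H of degree <= d with f(x) * H(x) = 0 for all
    # x in {0,1}^n?  H is written over all multilinear monomials of
    # degree <= d; wherever f(x) = 1 we need H(x) = 0, one F2 equation
    # per such input.
    monos = [s for r in range(d + 1) for s in combinations(range(n), r)]
    rows = []
    for x in product((0, 1), repeat=n):
        if f(x):
            rows.append([int(all(x[i] for i in s)) for s in monos])
    # Gaussian elimination over GF(2): a nonzero nullspace exists
    # exactly when the rank is below the number of monomials.
    rank, m = 0, len(monos)
    for col in range(m):
        piv = next((r for r in range(rank, len(rows)) if rows[r][col]), None)
        if piv is None:
            continue
        rows[rank], rows[piv] = rows[piv], rows[rank]
        for r in range(len(rows)):
            if r != rank and rows[r][col]:
                rows[r] = [a ^ b for a, b in zip(rows[r], rows[rank])]
        rank += 1
    return rank < m
```

For the AND of three variables this confirms what the talk uses: the primal rational degree is 1 (H = 1 + x0 annihilates it), while the dual rational degree equals the fan-in, since only H = x0·x1·x2 annihilates the complement.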
Therefore, the candidate weak PRF of Akavia et al. has rational degree logarithmic in the security parameter, and can therefore be broken by a rational degree attack in quasi-polynomial time. Furthermore, this is inherent for DNFs, because either at least one of the ANDs has low fan-in, resulting in low rational degree, or the function is very biased towards zero, and the corresponding candidate could therefore be broken by a linear attack.

So how to overcome this? The idea is to consider the two cases of the rational degree attack separately. Call the minimal degree of H such that G·H = 0 the primal rational degree, and call the minimal degree of H' such that (G+1)·H' = 0 the dual rational degree of G. And with this, if we reconsider the function from the slide before a bit more generally, as just a disjunction of functions, then by the same observation — it's a bit less straightforward than what you saw on the slide before, but still very easy to see — the disjunction in some sense doesn't increase the dual rational degree: the dual rational degree is just the minimum of the dual rational degrees of all its terms. But on the other hand, what we provably show in this work is that the disjunction does do something to the primal rational degree. Namely, if all the terms are independent, so they operate on disjoint sets of variables, then the primal rational degree is the sum of the primal rational degrees of the underlying terms. This alone doesn't give much, because the rational degree is the minimum of the primal and the dual rational degree. But if we now look at an AND, it's very easy to see that it behaves dually to the OR: the primal rational degree is not increased, but the dual rational degree is now the sum of the dual rational degrees of all the G_i.
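Written out, the composition behavior just described, for functions $G_i$ on disjoint variable sets and as stated in the talk (notation mine, with $\mathrm{rdeg}^{+}$ the primal and $\mathrm{rdeg}^{-}$ the dual rational degree), is:

```latex
\mathrm{rdeg}^{+}\Big(\bigvee_i G_i\Big) = \sum_i \mathrm{rdeg}^{+}(G_i),
\qquad
\mathrm{rdeg}^{-}\Big(\bigvee_i G_i\Big) = \min_i \mathrm{rdeg}^{-}(G_i),
```

```latex
\mathrm{rdeg}^{+}\Big(\bigwedge_i G_i\Big) = \min_i \mathrm{rdeg}^{+}(G_i),
\qquad
\mathrm{rdeg}^{-}\Big(\bigwedge_i G_i\Big) = \sum_i \mathrm{rdeg}^{-}(G_i),
```

and the rational degree itself is $\min\big(\mathrm{rdeg}^{+}, \mathrm{rdeg}^{-}\big)$, which is why alternating OR and AND layers can drive both quantities up simultaneously.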
So by the previous slide, the OR increases the primal rational degree and the AND increases the dual rational degree. So we can provably increase the rational degree, which is the minimum of the primal and the dual rational degree, just by adding sufficiently many alternating layers. It turns out that we only have to add one more layer compared to the function of Akavia et al. And again, we have to choose the fan-in such that the function is not too biased. This candidate now has high rational degree by the considerations before, because we have an AND and an OR of sufficiently high fan-in as the two outer layers. And further, it again plausibly resists linear attacks, based on a combinatorial conjecture.

So that was our second candidate. Note here that the parities are secret, whereas the AC0 circuit is public. The next question we considered in this work is, in some sense, can you do it the other way around? Can you get a weak PRF in AC0 on top of public parities? So why do we care about this? The motivation for having a weak PRF in AC0 on top of public parities is that it would give a weak PRF that is pseudorandom on random codewords. This would directly imply a stateless symmetric encryption scheme with a decryption circuit fully in AC0, just by plugging the weak PRF into the encryption scheme I presented earlier. This is something that we do not currently know to exist. In their work, Akavia et al. put forward a conjecture regarding functions in AC0 on top of parities, saying that every function in this class has a heavy Fourier coefficient, meaning one of size one over quasi-polynomial. We strengthen their conjecture by saying that this heavy Fourier coefficient stems from some low-order Fourier coefficient, via applying the transpose of the linear mapping describing the mod 2 layer.
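To make the "heavy Fourier coefficient" notion concrete, here is a small brute-force sketch (my own, exponential in n, so toy sizes only) that computes all Fourier coefficients of a Boolean function; a coefficient is "heavy" when its absolute value is non-negligibly large, here at least one over quasi-polynomial in n.

```python
from itertools import product

def fourier_coefficients(f, n):
    # \hat{f}(S) = E_x[ (-1)^( f(x) XOR <S,x> ) ] for f: {0,1}^n -> {0,1};
    # |\hat{f}(S)| measures the correlation of f with the parity on S.
    coeffs = {}
    for S in product((0, 1), repeat=n):
        total = sum((-1) ** (f(x) ^ (sum(s & xi for s, xi in zip(S, x)) % 2))
                    for x in product((0, 1), repeat=n))
        coeffs[S] = total / 2 ** n
    return coeffs
```

For instance, the parity of two bits correlates perfectly with the parity character S = (1,1) and not at all with the constant character, while a 2-input AND already has the heavy coefficient 1/2 at S = (0,0), illustrating why Linial-Mansour-Nisan-style heavy coefficients are a distinguishing handle.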
Similar to Akavia et al. for their original conjecture, we show that this is provably true if M is a random map, independently of the AC0 circuit G, and if G is a DNF or CNF, independently of the map M. Our strengthening would imply that no weak PRF in AC0 on top of public parities with better than quasi-polynomial security can exist. This conjecture, together with our weak PRF candidate, also has implications for the conjecture that the inner product mod 2 cannot be computed in AC0 on top of parities. Namely, if there exists a weak PRF in AC0 on top of (secret) parities, but there does not exist a weak PRF in AC0 on top of public parities, then we show that this conjecture must indeed be true.

Finally, in our paper we ask how the existence of weak PRFs in AC0 mod 2 relates to other assumptions. Akavia et al. already showed that weak PRFs on top of parities imply learning parity with noise with high noise rate, if indeed every function in AC0 on top of parities has a heavy Fourier coefficient. We in some sense strengthen their result by showing that weak PRFs in all of AC0 mod 2 imply learning parity with noise for a specific code and noise rate, where our implication holds unconditionally. Further, we show that weak PRFs that fall into the variable-density learning-parity-with-noise framework, which includes our candidate weak PRF computed by sparse F2 polynomials, imply public-key encryption under an additional conjecture. This conjecture basically says that if there exists some code that is hard to decode with respect to some noise rate, then almost all codes are. It is an interesting open question whether this result can be shown unconditionally and/or extended to AC0 mod 2 more generally.

To summarize, in this paper we put forward new candidate weak PRFs with sub-exponential security. Our first candidate weak PRF is computed by sparse F2 polynomials. It fits into the variable-density learning-parity-with-noise framework.
It provably resists linear attacks, and it also has provably high rational degree, so it's not susceptible to algebraic attacks. Our second result is a candidate weak PRF in AC0 on top of parities. This is currently the only standing candidate that can be conjectured to have full sub-exponential security. This candidate falls within the learning-parity-with-simple-noise framework. It plausibly resists linear attacks, and it has provably high rational degree.

With this I would like to end. Thank you very much for your attention, and I hope you got interested and will look up more in the paper. Thank you.