Thank you, and thank you, Yuval, for the introduction and for organizing this workshop. I think cryptography has really been leading theoretical computer science in a lot of ways; a lot of things, like interactive proofs, actually originated from cryptography. So maybe this workshop should have been called "behind cryptography" rather than "beyond cryptography." I'm going to talk about optimal algorithms and assumption factories, and at some points I'll mention works, some published and some not yet published, by several people, including Zvika Brakerski, Sam Hopkins, Aayush Jain, Ilan Komargodski, Pravesh Kothari, Ankur Moitra, and Amit Sahai.

Let me start with something that I see as a puzzle. In cryptography, we're used to the fact that people sat down in the 1970s, looked at a problem, like the RSA problem, tried the obvious ways to solve it, couldn't, and the problem has stood until this day. So in crypto we have the intuition that if a particular hardness assumption doesn't get broken, it's probably because the problem really is inherently computationally difficult. In other fields, people have roughly the opposite intuition: they think most problems are easy, easy unless proven otherwise. Almost every problem people solve in machine learning is NP-hard, and they still solve them all the time; optimizing over deep neural networks, or anything like that, is always NP-hard, and they still solve it every time. And if you look at economics, even though we have all these proofs that reaching equilibrium is computationally hard, economists are somewhat unfazed by them. Éva Tardos once phrased it to me as: there is milk on the shelf. That is obvious proof that people manage to find the equilibrium, that stores and customers manage to come together at an equilibrium, even though we have all these theoretical proofs that this is supposed to be hard. [Audience comment.] That's a good point, you're right: sometimes the shelves are empty, and that's proof that it's not in equilibrium. Well, Amazon doesn't have shelves anymore, maybe.

So in all these fields, people think that, among hard versus easy problems, problems of one type are a small island in a sea of problems of the other type; but crypto on one side, and ML and econ on the other, disagree about which is the island and which is the sea. Is the sea the hard problems, with a small island of easy problems, or is the sea the easy problems, with a small island of hard problems? So this is somewhat of a puzzle: should we think of a problem as easy unless proven otherwise, or as hard unless proven otherwise? And maybe there is not as big a contradiction as it seems, because, first of all, in crypto we have these ideas of what is the best algorithm to apply to a certain class of problems; we have generic group models, random oracle models, et cetera.
There, we try to capture the types of algorithms that could be applicable to attack a certain problem. And if you look at machine learning, it's also not as if people believe problems are easy in the sense that whenever they're faced with a new problem they come up with a brilliant new and unexpected algorithm; they always use the same algorithm. So they also have the intuition that, for the class of problems they are interested in, if a problem is solvable at all, it is solvable by that one kind of algorithm. And in economics, too, a market doesn't run, I don't know, Strassen's fast matrix multiplication or something like that; it basically converges to an equilibrium by some kind of best-response dynamics, where people modify their prices based on supply and demand.

So in some sense, the intuitions of all of these fields agree on something that is a little different from what we learn in Algorithms 101. If you take an intro algorithms class, you probably learned it from a very thick book, and the thick book describes the world as if there are all these problems, and for every different problem you come up with a brilliant, different algorithm. But maybe in real life it's more the case that all sorts of problems in a certain domain are solved by the same algorithm, and if that algorithm doesn't succeed, then no algorithm will succeed. That's the question I want to talk about: should we expect an optimal algorithm for a wide range of problems (not all problems), what kind of algorithm could be optimal, and what can we use such a thing for in cryptography?

Let me say that I'm going to do the equivalent of searching for your keys under the streetlight, in the sense that I'm going to focus on the easier class of computational problems for which we can more reasonably hope to understand such an optimal algorithm. I'm going to focus on combinatorial types of problems, which in crypto-land correspond to private-key objects like pseudorandom generators, and in algorithms and optimization to satisfiability and graph problems. In some sense, the things that are most interesting to us are the difficulty of problems like lattices and factoring, which are more algebraic in nature, are more public-key, and are also where quantum speed-ups seem to become possible; but maybe if we first get a good picture of the combinatorial domain, we can then move to the algebraic domain. This is not very well defined, and I think one of the main questions in this research agenda is how to define what "combinatorial" means. I don't know how to do it yet, but I'll give you some buzzwords and intuition. Maybe the prototypical combinatorial problem is satisfiability, and the prototypical algebraic problem is integer factoring. In satisfiability you want to maximize some objective; in integer factoring you want to put together a very rigid puzzle. If you think about combinatorial versus algebraic: in combinatorics we talk about maximizing the number of constraints satisfied, so it's more about inequalities than equalities.
In algebraic problems we have more types of interesting cancellations, we talk about exact solutions, and it's more about rigid structure and less about noise. Again, these are buzzwords and intuitions; for now let's use an "I know it when I see it" definition and focus on things like satisfiability, very simple pseudorandom generators, and certain graph partition problems, which somehow seem more combinatorial in nature.

The algorithm I'm going to talk about as a candidate for being an optimal algorithm is the sum-of-squares algorithm, which was proposed by several researchers, including Shor, Parrilo, and Lasserre. It is a meta-algorithm in two ways. First, it's not an algorithm for a single problem; you can apply it to a really wide range of problems, in some sense any problem in NP. Second, it has a tunable running-time parameter, so you can tune the algorithm both for the problem and for how much of a budget you want to give it to run, and you can ask how fast it runs on a particular instance or a particular problem. It's a common generalization of linear programming, semidefinite programming, and spectral algorithms, and it also encapsulates many combinatorial types of algorithms like local search and greedy. Indeed, in many cases you can show that it captures the state of the art, in the sense that whatever algorithm gives the best known performance for a certain problem, sum of squares will also embed that algorithm. It's not necessarily the way you would want to run it, because it involves solving a huge semidefinite program, which is polynomial time but not really efficient, but it does embed it. And in fact, in many cases it is the state of the art, in the sense that the best algorithm we know was obtained via this algorithm.

The reason I'm interested in this algorithm is that, on the one hand, it's powerful enough that you can make the claim that it's plausibly optimal in some range of parameters. I'm not going to focus on the worst-case setting in this talk, but for the worst case there are formal conjectures, such as the Unique Games Conjecture, which, if true, implies that this algorithm is optimal for constraint satisfaction problems. So on the one hand it's powerful enough to be plausibly optimal, unlike restricted models like AC0 or other bounded-depth circuit classes, models for which we sometimes prove lower bounds but which are very restricted. On the other hand, it is still weak enough that we can rigorously analyze it and show that it fails on certain instances; it's not so powerful that proving it sometimes fails is as hard as proving that P is different from NP. So it seems to strike this balance, which makes it quite interesting. I'm not going to go over this whole list, but these are specific cases where recent work has shown that the sum-of-squares algorithm gives better results than were known before for a variety of problems; a lot of them come from machine learning, but some also come from quantum information theory, combinatorial optimization, et cetera, and this is just a partial list of more recent work.
In this talk I'm going to focus on the relation between sum of squares and simple pseudorandom generators. I'll start with a brief overview of sum of squares and then talk about how it applies to pseudorandom generators, though I do think this optimal-algorithm paradigm could be relevant beyond pseudorandom generators. Specifically, I'll talk about attacks on pseudorandom generator candidates that were recently proposed for obtaining indistinguishability obfuscators, and about some evidence for the security of other pseudorandom generators, which at the moment seem to be just beyond what we need for obfuscators but are still very interesting in their own right. Those are the two parts of the talk, and I'm going to start with part one: what the sum-of-squares algorithm is, very briefly, and then how it applies to pseudorandom generators.

By the way, this picture is a spectrahedron, which is the kind of object you optimize over in this algorithm. The sum-of-squares algorithm is obtained from a proof system. And what is the proof system? You're trying to prove statements of the following form: for every x that satisfies certain polynomial inequalities, x also satisfies another inequality. This can be very general. For example, you can express "x is an independent set in a graph" as a set of polynomial constraints: the equality x_i^2 = x_i says that every entry of x is 0 or 1, and then you can add constraints that say it is an independent set. So if you want to prove a statement like "every independent set in this graph has at most k vertices," you can easily write it as a statement of this form (I'll spell this example out in a moment). These are polynomials over the reals, and the axioms we use are very simple. One axiom is that a square is always non-negative. Another is that if you know that, for every x satisfying your conditions, p(x) is non-negative and q(x) is non-negative, then p times q and p plus q are also non-negative. This simple proof system turns out to be complete: every true statement of this form can be proven in it. This is a fundamental result in real algebraic geometry known as the Positivstellensatz, and it arose as a generalization of the solution to Hilbert's 17th problem, which says that every non-negative polynomial can be expressed as a sum of squares of rational functions. Now, you can also keep track of the maximum degree used in the proof, and the way we'll do it is to keep track of the syntactic degree, which means we don't allow cancellations: if p has degree d and q has degree d', we say p times q has degree d + d' and p plus q has the maximum of the two degrees, even if there happened to be a cancellation; it's purely syntactic. And then it turns out, and this is the sum-of-squares algorithm, that you can find degree-k proofs in time roughly n^k. So if k is constant, this is polynomial time. So you have this sum-of-squares proof system: it can prove anything of this form, it has a parameter called the degree, and the smaller the degree, the better the running time for actually finding proofs.
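To spell out that independent-set example in symbols (my notation, not a slide from the talk), the statement "every independent set in the graph G = (V, E) has at most k vertices" becomes "these polynomial constraints imply this inequality":

```latex
\begin{gather*}
x_i^2 = x_i \quad \text{for all } i \in V \qquad (\text{each } x_i \in \{0,1\}),\\
x_i x_j = 0 \quad \text{for all } \{i,j\} \in E \qquad (\text{no edge inside the set}),\\
\Longrightarrow \quad \sum_{i \in V} x_i \;\le\; k .
\end{gather*}
```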
One example of something you can prove this way is the Cauchy-Schwarz inequality. We can think of it as the following theorem: the dot product of p and q is at most the product of their norms; let's square both sides, so the dot product squared is at most the product of the squared norms. And you can prove it by just opening everything up and seeing that the difference is a sum of squares: if you expand the sum over i, j of (p_i q_j - p_j q_i)^2, you see that it equals exactly two times the quantity above, that is, two times the product of the squared norms minus the dot product squared. So that quantity is non-negative, because you have expressed it as a sum of squares, and that's the proof. There are many other results showing that lots of interesting theorems have low-degree sum-of-squares proofs, and also some results showing that some interesting statements require high-degree sum-of-squares proofs. One thing I want to make sure to emphasize is that having a low-degree sum-of-squares proof is not the same as being simple for humans. You can have results like "Majority Is Stablest," from papers that appeared in the Annals of Mathematics, that can still be proven in this proof system with only constant degree. And on the other hand, the statement that a random 3-SAT formula is not satisfiable, which takes about five lines using the probabilistic method, requires very large degree in sum of squares. So the notion of having small sum-of-squares degree and the notion of having what we intuitively think of as a simple proof are not the same thing. The important point is that lots of interesting statements can be proven with low-degree sum-of-squares proofs, with the probabilistic method being an important exception.

So how does this sum-of-squares algorithm work? I'll just give you a cartoon. This slide may or may not make a lot of sense, and if it doesn't, you can just ignore it and say, okay, there is this algorithm, it just works. The idea is the following. Think of the simplest kind of statement: we're trying to prove that there is no x on which all of these polynomials simultaneously vanish. It turns out that you can collapse all the derivation steps of a sum-of-squares proof of this, and at the end it has the following form: minus one equals P plus S, where P is the sum of p_i q_i for some arbitrary polynomials q_i, and S is a sum of squares. Now if you think about it, if there were an x on which all of the p_i vanish, then P(x) would have to be zero, S(x) would have to be non-negative because it's a sum of squares, and you would get the contradiction that minus one equals a non-negative number. So this identity is a proof that there is no such x: if you assume there is an x with p_1(x) = p_2(x) = ... = 0, you get a contradiction. That's what a sum-of-squares refutation looks like. And if you think about it, if you take some P and S that satisfy this equality (star), and some P' and S' that also satisfy it, then any convex combination of them satisfies it as well. That means the set of such proofs is a convex set, and it turns out to be a nice convex set that can be expressed basically as a spectrahedron.
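To give a feel for what "searching for a proof with a semidefinite program" means in practice, here is a minimal sketch of the simplest instance: certifying that a single univariate polynomial is a sum of squares. This is my own illustration, not code from the talk; it assumes the cvxpy package with an SDP-capable solver is installed, and the polynomial and sizes are purely illustrative.

```python
import cvxpy as cp

# Monomial basis z = [1, x, x^2]; look for a PSD matrix Q with p(x) = z^T Q z,
# i.e. a degree-4 sum-of-squares certificate that p is non-negative everywhere.
# Illustrative polynomial: p(x) = x^4 + 2x^3 + 3x^2 + 2x + 1 = (x^2 + x + 1)^2.
coeffs = {0: 1, 1: 2, 2: 3, 3: 2, 4: 1}  # degree -> coefficient of p

Q = cp.Variable((3, 3), symmetric=True)
constraints = [Q >> 0]  # Q must be positive semidefinite
for d, c in coeffs.items():
    # The coefficient of x^d in z^T Q z is the sum of Q[i, j] over i + j = d.
    constraints.append(sum(Q[i, j] for i in range(3) for j in range(3) if i + j == d) == c)

prob = cp.Problem(cp.Minimize(0), constraints)  # pure feasibility problem
prob.solve()
print(prob.status)       # "optimal" (i.e. feasible) means a certificate was found
if prob.status == cp.OPTIMAL:
    print(Q.value)       # any such PSD Q encodes a way to write p as a sum of squares
```

The feasible set here (PSD matrices satisfying linear constraints) is exactly a spectrahedron, which is the picture on the slide.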
So you can determine whether this convex set is empty or not using semidefinite programming. That's very much a cartoon, but that's basically how the algorithm looks: semidefinite programming to search for these proofs, and the number of variables you need in the program is n^k, where k is the degree. Now, if I confused you, and I think there is a good chance I did, let's step back and ask what the take-home message is. This sum-of-squares proof system is, on the one hand, powerful: you can prove many theorems using low-degree sum of squares. On the other hand, it's what's known as automatizable: you have an algorithm to find such proofs when they exist. And third, it's useful, because it gives you an algorithmic paradigm that you can apply to many problems; I haven't demonstrated that here, but one could write a long list of such problems. We can also wonder whether it's optimal, because it does seem to capture the state of the art for lots of problems. So you can wonder whether, when it fails, nothing succeeds; that would basically be the sum-of-squares optimality conjecture. And if you assume it is optimal, then you basically get an assumption factory, in the following sense: for every concrete problem, if you show a sum-of-squares lower bound for it, and you assume that sum of squares is optimal, then you immediately get the conjecture, or assumption, that this problem is hard. So if you believe, and are able to state, a conjecture that sum of squares is optimal in a certain domain, you basically get a factory that gives you many hardness assumptions for very concrete problems.

So let me now move to part two and talk about the relation to pseudorandom generators, specifically simple pseudorandom generators. One way to think about it: a pseudorandom generator takes a seed of size n and outputs a longer output of size m, where m is bigger than n. And you can define what "simple" means. One notion of simple is that every output bit depends on just, say, five input bits. Another, more refined notion that was recently put forward is that every output depends on just a few blocks of the input bits, thinking of the seed as coming from a larger alphabet. And you can also talk about its degree as a polynomial. These have been widely studied, so let me just give you the flavor of some results. People have shown that if these simple pseudorandom generators exist, you get all sorts of cool applications, for example constant-overhead multi-party secure computation, or, in a work of Rachel Lin, reducing the requirements for indistinguishability obfuscation to just constant-degree maps. And if you assume even more, if you assume super-simple pseudorandom generators, then you can get magic, and by magic I mean you can get obfuscation from standard assumptions, just bilinear-map assumptions. We'll see what super-simple means, but roughly speaking, degree three is simple and degree two is super-simple. We'll focus on the degree notion; there have also been works on locality, et cetera.
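Before going on, here is how I would summarize the notions of "simple" just mentioned, in my own shorthand (this is a recap, not a slide from the talk):

```latex
G : \{0,1\}^n \to \{0,1\}^m, \qquad m > n,
\quad\text{is called}
\begin{cases}
\text{local (locality } c\text{)}: & \text{each output } G_i \text{ depends on at most } c \text{ input bits (e.g. } c = 5\text{)};\\
\text{block-local (block locality } c\text{)}: & \text{the seed is split into blocks over a larger alphabet}\\
& \text{and each } G_i \text{ depends on at most } c \text{ blocks};\\
\text{degree } d: & \text{each } G_i \text{ is a polynomial of degree at most } d \text{ (over the integers)}.
\end{cases}
```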
So think of every output as a degree-d polynomial, viewing it as a polynomial over the integers. There was this work of Lin and Tessaro showing that if you have this notion known as block locality two, which is a stricter notion than simply being degree two, it's degree two of a particular flavor, then you can get indistinguishability obfuscation, this holy grail of crypto, under basically standard assumptions. But unfortunately, in work with Brakerski, Komargodski, and Kothari, and in simultaneous work of Lombardi and Vaikuntanathan, we showed that sum of squares breaks block-locality-two pseudorandom generators. Very recently there was work showing that you can relax the assumption in two ways: you only need degree two, and you only need a notion weaker than a full-fledged pseudorandom generator, and you can still get the same conclusion. And very recently, in work that is not yet online, we showed that sum of squares strongly breaks these degree-two pseudorandom generators, so it also breaks the kind of assumption made in those recent works. This is what I'll say a little more about now, though again not in great detail.

So how do you break these degree-two pseudorandom generators? Think of it as follows. Your goal now is to break the generator completely, so you really want to recover the seed, not just predict an output or distinguish it from uniform. You're given g_1(x) through g_m(x), the outputs of degree-two polynomials; you know the polynomials but you don't know x, and your goal is to recover x. The way we look at it is to rephrase the question: we think of the unknown x as defining a rank-one matrix, x times x transpose, and then g_1 through g_m, instead of degree-two polynomials in x, can be thought of as linear measurements of that matrix. So we're getting the outputs of m linear functions on this rank-one matrix, and our goal is to recover it. This is just a rephrasing of the problem, but once you rephrase it this way, it becomes a problem that people have actually studied: it is known as the low-rank recovery problem, and it has been studied in great depth. It is known to be solvable using semidefinite programming, which is a special case of sum of squares, provided the g_i satisfy a condition known as the matrix restricted isometry property. And we showed that basically all the candidates that were recently put forward do satisfy this property. That's about as much detail as I'm going to show. It turns out that the SDP in this case is simple enough that you can actually run it; it's about five lines of Julia code. And in our experiments, when m equals n, we already recovered something that agreed with the original seed on about 75% of the coordinates, and when m is about 2n, we basically recovered the original seed completely. [Audience question.] Yes, the particular candidates do satisfy it.
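As a rough illustration of the kind of experiment just described, here is a minimal Python sketch of rank-one recovery via the standard nuclear-norm SDP relaxation. This is my own sketch, not the authors' five-line Julia script; the sizes, the plus/minus-one seed, and the Gaussian measurement matrices are all illustrative assumptions, and it requires numpy and cvxpy.

```python
import numpy as np
import cvxpy as cp

n, m = 20, 60  # toy seed length and number of outputs (illustrative only)
rng = np.random.default_rng(0)

x = rng.choice([-1.0, 1.0], size=n)       # unknown seed, as +/-1 entries
A = rng.standard_normal((m, n, n))        # the i-th output is g_i(x) = <A_i, x x^T>
y = np.array([np.sum(A[i] * np.outer(x, x)) for i in range(m)])

# Treat x x^T as an unknown matrix X and minimize its nuclear norm subject to the
# linear measurements: the standard convex relaxation for low-rank recovery, and a
# special case of degree-2 sum of squares.
X = cp.Variable((n, n), symmetric=True)
constraints = [cp.sum(cp.multiply(A[i], X)) == y[i] for i in range(m)]
cp.Problem(cp.Minimize(cp.norm(X, "nuc")), constraints).solve()

# Read off an approximate seed from the top eigenvector of the recovered matrix.
w, V = np.linalg.eigh(X.value)
xhat = np.sign(V[:, -1])
agreement = np.mean(xhat == np.sign(x))
print("fraction of seed entries recovered (up to a global sign):", max(agreement, 1 - agreement))
```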
I believe this exact condition is not strictly needed to be a PRG, but then it becomes a little more delicate, because you could probably make sure that the attack doesn't recover x, for example by just ignoring information about, say, half of the bits of x; then you would not satisfy the RIP condition, but that also doesn't seem useful for building a generator. Basically, for any candidate where the polynomials are drawn from some distribution, with each coordinate drawn i.i.d., fairly weak conditions on the distribution ensure that it satisfies RIP once m is, say, n times polylog(n).

Let me say why people studied this matrix recovery problem, which will also bring us to the distinction between simple and super-simple pseudorandom generators. People did not study matrix recovery in order to break pseudorandom generators. One of the motivations was something like the Netflix problem. Suppose you're a streaming service and you have some reviews by users of movies, and now you want to predict how a user will rate movies they haven't seen. Another way to say it: you have this matrix of users versus movies, you have some observations about it, namely the ratings you have, and you want to complete it, to fill in the entries you don't have. How do you do that? The idea is the following. Each user has certain things they like and certain things they dislike: maybe they like kids' movies, dislike action, like horror, and so on; and each movie has certain features. The rating a user gives is, up to some noise, roughly the dot product of the user's feature vector and the movie's feature vector. And that basically means this matrix X is low rank. So it becomes exactly the low-rank recovery problem, here with measurements that just look at individual coordinates of the matrix, but you can generalize to any linear measurements, and then it has been very widely studied. It's known that if, say, the rank r is constant, and in our case r is one, then you only need about O-tilde of n observations, which corresponds to the fact that you cannot get such a pseudorandom generator that stretches too much. That's basically why these results end up breaking the degree-two pseudorandom generators. [Audience question.] Yes, the polynomials are over the integers, but they are embedded in the reals, as opposed to finite fields.

Now you can ask what degree-three pseudorandom generators correspond to, and that is a problem like tensor recovery, which is also well motivated. For example, sometimes the particular movie you want to watch depends on the time of day. In my household, for example, the movies we tend to watch on Netflix at 11 a.m. on Saturday, when the parents are maybe trying to sleep and the kids are awake, might be very different from the movies we watch at 11 p.m. when the kids are asleep. My kids just watch deep educational documentaries, while I wait for them to go to sleep so I can watch Barney. So sometimes you have a tensor, a three-dimensional object, that you want to complete. And it turns out there is a difference, or there seems to be a difference: sum of squares does recover it, but it needs more measurements.
In particular, if you think of d as equal to 3, so it's a 3-tensor, then you need about n^1.5 measurements, thinking of the rank as constant, as opposed to only essentially n measurements. We can do it approximately and we can do it exactly: we know how to recover with n^1.5 measurements, and it also follows from previous lower bounds that sum of squares actually requires exponential time if you have fewer than n^1.5 measurements. So this leads to the conjecture that maybe the maximum output length for a pseudorandom generator of degree d is roughly n^(d/2). Degree 2 gives roughly n, so no significant stretch; degree 3 might stretch to n^1.5; degree 4 to n^2. And if you believe that sum of squares is optimal, then this becomes a concrete conjecture. Basically it says: if you want super-linear stretch, then you can break these PRGs when they have degree at most two, and the known lower bounds show that sum of squares, at least, will require exponential time when the degree is 3 or more. So degree at most two is obviously insecure, but you could conjecture that degree 3 or more is secure. That's how, if you believe in SOS optimality, you can transform an SOS lower bound into a cryptographic hardness assumption, and in particular people have been working very hard on trying to use that hardness assumption as a basis for building obfuscators on standard assumptions.

[Audience question: when you say sum of squares requires a certain time, is that for particular PRG candidates?] Yes, you have to give a particular candidate; but for many candidates, and in particular random ones, it really does require exponential time. You have to avoid being silly: there are some silly things you can do that would not be a secure pseudorandom generator even at degree 100, for example having a biased output.

If you step back, we have a similar issue in both machine learning and crypto, in that we deal with average-case hardness. (How much time do I have? Okay, I'm in good shape.) Both machine learning and crypto deal with average-case hardness; in machine learning you try to solve the problem. In both cases, we basically cannot use NP-hardness to get insight, and in some sense it's very hard for us to even use reductions at all. There are very concrete cases, as Cynthia knows well, where there are worst-case to average-case reductions, but in many of these cases we don't have them. Also, for many problems, NP-hardness reductions modify the distribution, so they will not give you hardness of a problem on a natural distribution. So NP-hardness gives us very limited insight into difficulty for both of these fields. And maybe this is again a case where the two fields agree: machine learning people are not worried when a problem they're trying to solve is NP-hard, and cryptographers have learned the hard way that they should not rest on their laurels just because a problem they're trying to base their cryptosystem on is NP-hard.
So both fields agree that NP-hardness is not the right measure of difficulty here. And in some sense, in machine learning you try to solve the problem and you're happy if you succeed; in crypto we're in a more difficult situation, because you try to break it, and if you can't, you just hope that no one else can. With these optimality conjectures, rather than the cycle of build, pray, break, we could maybe get a more systematic way to analyze these problems, this kind of assumption factory. To do that, we really need some kind of formal optimality conjecture. So let me talk about the sum-of-squares optimality conjecture. The current status is the following: this is my conjecture, and I haven't yet been able to formalize it, but I conjecture that it is possible to formalize. So maybe this is the meta version of the sum-of-squares optimality conjecture, and maybe there is even a meta-meta version: I conjecture that there is some optimal algorithm, I don't know if it's sum of squares, and I conjecture that for that algorithm there is also such a conjecture.

The main bottleneck is, in some sense, what it means to be combinatorial, and what the clean, general way to say it is that doesn't look like you're tailoring the definition to the five known cases. Again, it's an "I know it when I see it" definition; it basically covers constraint satisfaction problems, certain graph partition problems, and, more generally, the intuition is that sum of squares is plausibly optimal when you're really talking about approximating solutions, in cases where maybe you don't get your instance precisely but only up to some noise (so, linear equations, but noisy ones), so you're really talking about approximation rather than exact solutions, and you're trying to optimize over nice low-degree varieties. What are nice low-degree varieties? Things like the Boolean cube, the unit sphere, low-rank matrices, or low-rank tensors. There might be technical ways to phrase "nice" in terms of Gröbner bases or something like that, but basically these two concepts seem important: you're optimizing over a relatively nice space, where nice can be the Boolean cube, which is nice but not easy, and you're not talking about exact solutions, which is crucial to avoid issues like the fact that Gaussian elimination unfortunately exists. I think every cryptographer has struggled at some point with the annoying fact that Gaussian elimination exists. So the formal statement is still a work in progress, but I do think there should be one, and I think it would be very interesting for cryptography.

One way to think about this is the following. Typically, when you write a crypto paper and put it on ePrint, you try to make your assumption as weak as possible. If you know some caveat that you only need in your assumption, you add it; that's why assumptions in crypto sometimes have five adjectives, like linear, BVL, and this and that: you try to assume the least.
And the optimality conjecture is, in some sense, the opposite philosophy. Instead of assuming the least, you say: speak loudly and carry a big bull's-eye on your back. You make a very bold assumption, which is basically an assumption factory. That makes it easier to break, in some sense, but it also means that if it survives, you're more confident in it, because it's not very tailored; you didn't invent it just for this particular construction, it's as far as you can get from the assumption "the construction is secure because our conjecture is that the construction is secure." So I do think it's very useful for cryptographers to think about these kinds of meta-assumptions, and in some sense they have, with the generic group model and so on, although in cryptography you sometimes have the annoying phenomenon that whenever you try to formalize these generic-model assumptions, you get something that's obviously false. My hope is that here there is a concrete conjecture you can make, because it's really not a generic or black-box type of assumption: the algorithm has full access to, say, the SAT formula, it doesn't treat the formula as a black box. So it's not an ideal-model type of assumption, and I think it has a chance of being one that you can formally specify. Yes?

[Audience question: do you really want to say, try SOS, try Gaussian elimination, and if neither works it's secure?] Well, I would not; I think that would be too delicate. But there have been works of that kind: there is the Feige-Kim-Ofek result, which is an interesting thing, not an algorithm so much as a certificate, and it actually combines SOS and Gaussian elimination. My hope is that, if you add noise to your problem, then algebraic algorithms like Gaussian elimination, Strassen, et cetera, go out of the picture. You've moved your problem under the streetlight, where the Gaussian elimination monster doesn't operate. So once you add noise to your problem, I hope the only candidate algorithms are these more continuous optimization algorithms, and not Gaussian elimination. And I guess the streetlight here is not necessarily the easy part, but rather the part that is more analyzable. Yes?

[Audience question.] Yes, that's the problem. Defining Gaussian elimination is a little hard, because, for example, you could possibly modify your input: Gaussian elimination over what group? You could present your input as something where it's not obvious that Gaussian elimination applies, it's not literally XOR or something like that, but after you interpret your input in the right way, it turns out to be embeddable in, say, some abelian group. In particular, for some pseudorandom generators, it seems you can have, say, non-abelian linear equation problems to which Gaussian elimination doesn't directly apply, but for breaking the pseudorandom generator it only matters that there is an abelian group hiding in there; if the group is solvable, you can still break it. So sometimes you might need Gaussian elimination, but you need to massage things first.
And the Feige-Kim-Ofek result is a very nice thing, which I cannot explain in detail here, but it combines Gaussian elimination not by running it as a black box, but by finding subsets of your formula on which you can run Gaussian elimination. Yes? [Audience question.] That's actually a great question, which maybe leads me to the next slide. I think that's the definition of a great question: one that leads you to the next slide. In any case, I just want to say that even if you think I'm completely wrong and sum of squares is not optimal, it still makes sense, when you propose an assumption about something to which sum of squares could be applicable, to use it as a sanity check. That's definitely the case.

And let me jump to exactly Vinod's point: there is this issue that sum of squares really makes sense for problems that are presented as low-degree polynomials over the reals. If you look at certain cryptographic problems that are arguably combinatorial, say block ciphers, which have more depth to them, as opposed to just a one-round function or parity with noise, the problem is somehow not naturally a low-degree problem. So it seems like sum of squares is not applicable even though the problem does seem combinatorial in nature, and understanding whether there is a generalization of SOS, or a way to view such things that still places them in the combinatorial domain, is very interesting. For both parity with noise and the learning-with-errors problem, it seems these are problems that change their nature when you change a quantitative parameter. For example, for learning with errors, we know there is a certain regime of noise magnitude where the problem is NP-hard; reduce the noise to a certain level and it's no longer NP-hard, it's in NP intersect coNP, and then it becomes useful for crypto and maybe feels a little algebraic; reduce the noise even more, and at some point the problem just becomes solvable in polynomial time. So sometimes you can have a phase transition where the nature of a problem changes as you change a quantitative parameter, and that's again something we don't completely understand. But it seems like more noise makes a problem more combinatorial, and much less noise can make it either easy or perhaps algebraic.

And I should say that after we applied this attack, which was joint work with Aayush and Amit, who are part of this IO paper, they now have a new PRG conjecture which sits somewhere between degree 2 and degree 3. It has an interesting mix of algebraic and combinatorial structure, because it's kind of like degree 3, but they reveal something under LWE. This is something we have not yet analyzed, and I at least don't yet have good intuition for it. So this is one very concrete open question: to understand this conjecture of theirs. And we really want to understand where these sum-of-squares methods could be applicable, and how we could apply them to other crypto settings where the polynomials seem larger, of higher degree, at least naturally.
There is also work on trying to understand the difficulty of sum of squares itself and on proving lower bounds. The natural setting for lower bounds is instances that are unsatisfiable but where we want to understand whether sum of squares can tell that they are not satisfiable, and there is some interesting new work that tries to connect this to statistical-physics predictions of phase transitions between easy and hard regimes. You can also ask about the relation between that and SGD, et cetera; there are lots of open questions and lots of things we don't understand. And I think that's it.