Hi, thank you for coming to the talk, and thanks to the organizers for organizing. I hope you can hear me. I'm going to talk about the average-case complexity of counting cliques in Erdos-Renyi hypergraphs. This is joint work with Matt Brennan and Guy Bresler, who is my advisor. First I'm going to give you the setup of the problem, then I'm going to motivate the problem, and then I'm going to explain the results and give a brief proof sketch. We don't have that much time, so I won't be able to give you all the details.

The setup of the problem is the following. You're given an Erdos-Renyi hypergraph, a random hypergraph in which each s-subset of the vertices is a hyperedge independently with probability p. It's the simplest model for random hypergraphs, and you're asked to count the number of k-cliques in this hypergraph, where k is a small constant. Think about, for example, the case of graphs, where the rank of the hypergraph is 2. Then each possible edge is in the graph with probability p, and you might want to count the number of triangles (3-cliques), or the number of sets of four vertices with all edges between them (4-cliques), and so on. This is the basic problem we're going to consider. The question we're going to ask is: how does the optimal running time for counting the number of k-cliques in an Erdos-Renyi hypergraph trade off with the number of vertices, with the density of the graph, and with the rank of the hypergraph? And in particular, how does it differ in this average case versus in the worst case?

Now I'm going to motivate why we would ever consider this question of k-clique counting on Erdos-Renyi hypergraphs. It turns out that clique problems on Erdos-Renyi graphs have been studied for a while, and maybe the most famous or well-known example is the planted clique problem. In the planted clique problem, you're given an Erdos-Renyi graph with a k-clique planted in it, and you're asked to find this k-clique. It's known that this is a difficult problem in certain regimes for a wide variety of restricted models of computation, in the sense that, for example, any SOS algorithm for this problem takes super-polynomial time. It's also a question with interesting implications from the point of view of statistical-computational gaps: if you assume that finding a planted clique is hard, then you get statistical-computational gaps. Another problem studied on G(n, p) is finding the largest clique in a randomly sampled Erdos-Renyi graph, and again there are lower bounds for restricted models of computation, and there are many other such problems, for example finding large independent sets in Erdos-Renyi graphs. So there's a wide body of work studying clique problems on Erdos-Renyi graphs, but all of these papers prove hardness only for restricted models of computation. Ideally, we would want to base the average-case hardness of these problems on the Erdos-Renyi distribution on some worst-case hardness assumption; we would want to say, for example, that planted clique is NP-hard. Unfortunately, we don't know how to do this, and there are, in fact, barriers against worst-case to average-case reductions of this kind. So instead, in this talk and in this paper, we're going to consider a different problem, a problem that is in fact solvable in polynomial time.
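A minimal toy sketch of the Erdos-Renyi hypergraph model from the setup above, under my own encoding (hyperedges stored as frozensets; the function names are illustrative, not from the paper):

```python
import random
from itertools import combinations

def sample_erdos_renyi_hypergraph(n, p, s, seed=None):
    """Rank-s Erdos-Renyi hypergraph on n vertices: each s-subset of the vertices
    is a hyperedge independently with probability p."""
    rng = random.Random(seed)
    return {frozenset(e) for e in combinations(range(n), s) if rng.random() < p}

def is_k_clique(vertices, hyperedges, s):
    """A k-clique is a set of k vertices all of whose s-subsets are hyperedges."""
    return all(frozenset(e) in hyperedges for e in combinations(vertices, s))

# Example: a rank-3 hypergraph on 10 vertices with constant edge density p = 1/2.
H = sample_erdos_renyi_hypergraph(n=10, p=0.5, s=3, seed=1)
print(is_k_clique(range(4), H, s=3))   # is {0, 1, 2, 3} a 4-clique?
```

For rank s = 2 this is just the usual G(n, p) random graph.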
Counting the number of k-cliques when k is a constant is in polynomial time, because you can just do it in n^k time by enumerating all n choose k subsets of k vertices and checking whether each one is a clique. So instead of proving that counting k-cliques is NP-hard, which we can't, because it is in polynomial time, we'll try to pin down the exact runtime that's necessary. Based on the worst-case hardness assumption that counting k-cliques is hard on worst-case graphs, we're going to show that it's hard on average-case graphs; in fact, it's hard on Erdos-Renyi graphs, which is sort of the simplest random graph distribution.

Here's the plan for the rest of the talk. In the first part, I'm going to overview the algorithmic results: I'll mention the previously known worst-case algorithms for k-clique counting on adversarially chosen graphs, and I'll mention some algorithms that we have in this paper for k-clique counting on Erdos-Renyi graphs, but I won't be able to get into the details of these. Then I'm going to present the main hardness result, where we show the average-case hardness of counting k-cliques on Erdos-Renyi graphs based on a worst-case hardness assumption, and I'll give a brief proof sketch. We'll conclude with some open problems; there are a lot of interesting directions.

First off, algorithms for counting k-cliques on general graphs or hypergraphs. In the worst case, for hypergraphs with rank at least three, exhaustive search, enumerating over all n choose k subsets of vertices and checking whether each one is a clique, is the best that's known. For graphs, when the rank is two, there's a speed-up using the fast matrix multiplication trick. These are the best algorithms that are known. So we can ask: what if we take a dense hypergraph, so the edge density is constant, meaning each edge has a constant probability of being in the graph; is the complexity the same as the worst case? It doesn't seem like we can do anything better there. But in the sparse case, when the edge density is n^{-alpha} for some parameter alpha, we can get better algorithms: we show algorithms based on greedy random sampling, and for the graph case greedy random sampling with a fast matrix multiplication speed-up similar to the worst-case one.

So, a couple of points. One, there are faster algorithms for counting k-cliques in sparse Erdos-Renyi hypergraphs than in the worst case, which is interesting. Two, we can ask what can be improved: can we actually speed up any of these algorithms any more? Well, it turns out that under a standard complexity-theoretic assumption, the exponential time hypothesis, the worst-case runtime of k-clique counting on graphs and hypergraphs is n^{Omega(k)}. So in any case, we wouldn't be able to speed up the worst-case algorithms too much. But we can ask: can the average case be improved? And the main result is that, in fact, the average-case runtimes really can't be improved much. In particular, counting k-cliques in dense Erdos-Renyi hypergraphs has exactly the same runtime complexity as if I were to give you a worst-case graph. So it's no easier to count the number of k-cliques exactly in a dense Erdos-Renyi hypergraph than in an adversarially chosen graph.
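As a toy illustration of these two baselines for rank-2 graphs, here is a sketch assuming a 0/1 numpy adjacency matrix (my own encoding, not the paper's code): exhaustive enumeration over all n-choose-k subsets, and, for k = 3, the standard matrix multiplication trick via trace(A^3)/6.

```python
from itertools import combinations
import numpy as np

def count_k_cliques_exhaustive(adj, k):
    """Enumerate all (n choose k) vertex subsets and test each one: roughly n^k time."""
    n = adj.shape[0]
    return sum(
        1
        for S in combinations(range(n), k)
        if all(adj[u, v] for u, v in combinations(S, 2))
    )

def count_triangles_matmul(adj):
    """For graphs (rank 2) and k = 3: trace(A^3) counts each triangle 6 times,
    so fast matrix multiplication beats naive enumeration."""
    A = adj.astype(np.int64)
    return int(np.trace(A @ A @ A)) // 6

# Sanity check on a small Erdos-Renyi graph G(n, p) with n = 8, p = 1/2.
rng = np.random.default_rng(0)
upper = np.triu((rng.random((8, 8)) < 0.5).astype(int), 1)
A = upper + upper.T
assert count_k_cliques_exhaustive(A, 3) == count_triangles_matmul(A)
```

The sketch only shows the k = 3 case of the matrix multiplication idea; for general k the fast matrix multiplication speed-up mentioned in the talk brings the worst-case exponent down to roughly omega*k/3.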
And also, in fact, we can show that the runtime we have for counting in sparse Erdos-Renyi hypergraphs is optimal in some regimes, if we assume that the worst-case exhaustive search algorithm for hypergraphs is optimal. Okay, I'm going to show you some diagrams that maybe make this easier to visualize, but the takeaway is: dense Erdos-Renyi hypergraphs have the same complexity as the worst case; sparse Erdos-Renyi hypergraphs have faster algorithms than the worst case, but we can prove those algorithms are optimal in some regimes, assuming that the worst-case algorithms that we know are optimal.

So our main result is just this: we're going to assume that n^k is the optimal running time for counting k-cliques in a worst-case hypergraph with rank at least three, and that n^{omega*k/3}, where omega is the matrix multiplication constant, is the optimal running time for worst-case graphs, and we're going to show that Erdos-Renyi k-clique counting is hard. In this diagram, the y-axis is the log of the amount of time an algorithm for counting the number of k-cliques in a hypergraph has to take, and the x-axis is the size k of the cliques you're counting. If you assume that the worst-case algorithm is optimal, you get a sort of feasible regime: you can derive this blue area, where you can prove that there are no algorithms that run in that amount of time and count k-cliques in these sparse Erdos-Renyi hypergraphs, and these lower bounds in fact match, up to the k-clique percolation threshold, the algorithm that we derive in the paper for counting k-cliques in Erdos-Renyi hypergraphs. Similarly, there's another diagram for graphs, but here the region that's left open is a little wider; we can pin down the optimal exponent to lie between two bounds depending on omega and alpha, where alpha parametrizes the sparsity of the graph.

I'm just going to put this wall of text here, you don't have to read it all. The takeaway is that our main theorem is a worst-case to average-case reduction, which shows that if you have an algorithm for k-clique counting on Erdos-Renyi hypergraphs, then you can convert it, with not too much of a slowdown, into an algorithm that counts k-cliques on any worst-case hypergraph. So the punchline is that you get this somewhat intricate average-case complexity for the problem of counting k-cliques in Erdos-Renyi hypergraphs just from a very simple worst-case complexity assumption.

Okay, so I'm now going to give a two-slide proof sketch, where I'll explain some of the ingredients of the proof; there's unfortunately not enough time to explain the whole proof, but this should give you an idea. The starting point is the following, and let's restrict ourselves to graphs for ease of notation. The k-clique count of a graph is a low-degree polynomial in the adjacency matrix A: you can write the number of k-cliques as P(A), the sum over all size-k subsets of the vertices of the product of the adjacency matrix entries between every pair of elements in that subset. So this is a polynomial of degree roughly k squared. And there's a classical trick for worst-case to average-case reductions for polynomials.
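Written out in standard notation (not anything specific to the paper), the polynomial just described is:

```latex
P(A) \;=\; \sum_{\substack{S \subseteq \{1,\dots,n\} \\ |S| = k}} \;\; \prod_{\{i,j\} \subseteq S} A_{ij}
```

Each term of the sum is 1 exactly when all pairs inside S are edges, so P(A) is the number of k-cliques, and the degree of P is k choose 2; the classical trick below applies to exactly this kind of low-degree polynomial.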
So if you have some polynomial over a large enough finite field and you're able to evaluate it on average-case inputs, then you're also able to evaluate it on a worst-case input. The proof is essentially by low-degree polynomial interpolation, and it works if the finite field F_q is large enough, which is going to be a technical issue we'll have to overcome on the next slide. This was recently applied in fine-grained complexity, which is the context of this problem, and there's a paper from a couple of years ago by Goldreich and Rothblum which applied it, in fact, to k-clique counting, and in particular to this polynomial. They ran into the issue that they want the average-case distribution to be over elements of the hypercube and not over elements of F_q^N, where the finite field is much larger than just two elements. Their solution is to replace each finite field element with a sort of unweighted gadget. They get a distribution over graphs that's artificial; they get very good error tolerance, but it doesn't seem possible to arrive at Erdos-Renyi hypergraphs by directly applying their method. So this is the main technical contribution of our paper: to show how to adapt these techniques to get this very natural distribution over hypergraphs.

The main technical obstacle is to map a random element of F_q^N to the Erdos-Renyi distribution, and ingredients of the proof include the following. One, we reduce to the k-partite k-clique counting problem, and the reason we do this is that we can then use the special structure of the k-partite k-clique counting polynomial. There's also a color-coding trick; we algebraically manipulate the expression and apply some results that we prove on the convergence of biased binary expansions modulo a prime, and this is roughly how we get from random elements of F_q^N to potentially sparse Erdos-Renyi hypergraphs. We also keep the field size small using Chinese remaindering. These are just some of the ingredients of the proof.

So I'm just going to conclude by summarizing the contributions. The contribution is to study k-clique counting on Erdos-Renyi hypergraphs. We have faster algorithms in the sparse regime, and we base the average-case hardness of this problem on its worst-case hardness. This differs from the other hardness results for clique problems on Erdos-Renyi graphs that I mentioned, like planted clique and finding the largest clique, because we're basing the average-case hardness of the problem on the worst-case hardness of the problem, which is something you can't really do for those other problems. Moreover, our results are tight in the dense regime, and they're tight in some parts of the sparse regime. Some open problems are the following: closing those gaps, so finding a way to match the upper bounds and lower bounds; maybe extending these techniques to approximate counting, proving that it's hard to get a good enough approximation of the number of k-cliques; and also maybe improving the error tolerance of the reduction. For more details, I welcome you to read our paper. Thank you.

Great. Thanks, very nice.
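A minimal toy sketch of the classical interpolation trick referenced in the proof sketch, with a stand-in low-degree polynomial and a pretend average-case oracle; the polynomial, field size, and names here are illustrative assumptions, not the paper's actual reduction:

```python
import random

q = 101   # prime field size; must be larger than the polynomial degree + 1
d = 3     # degree of the stand-in polynomial below

def P(x):
    """A stand-in degree-3 polynomial in two variables over F_q (the real reduction
    uses the k-clique polynomial, whose degree is roughly k^2)."""
    a, b = x
    return (a * a * b + 3 * a * b + 7) % q

def average_case_oracle(x):
    """Pretend average-case solver: assumed correct on uniformly random inputs."""
    return P(x)

def worst_case_eval(x):
    """Evaluate P on an adversarial input x using only oracle calls whose inputs
    are (individually) uniformly distributed over F_q^n."""
    n = len(x)
    y = [random.randrange(q) for _ in range(n)]   # random direction
    ts = list(range(1, d + 2))                    # d + 1 distinct nonzero points
    # g(t) = P(x + t*y) is a univariate polynomial of degree <= d in t, and for each
    # fixed t != 0 the query point x + t*y is uniform over F_q^n.
    gs = [average_case_oracle([(xi + t * yi) % q for xi, yi in zip(x, y)]) for t in ts]
    # Lagrange-interpolate g at t = 0 to recover g(0) = P(x).
    value = 0
    for i, ti in enumerate(ts):
        num, den = 1, 1
        for j, tj in enumerate(ts):
            if i != j:
                num = (num * (-tj)) % q
                den = (den * (ti - tj)) % q
        value = (value + gs[i] * num * pow(den, q - 2, q)) % q
    return value

x_adversarial = [5, 9]
assert worst_case_eval(x_adversarial) == P(x_adversarial)
```

The point is that each query x + t*y for fixed t != 0 is uniformly random, so a solver that only works on average suffices to recover the worst-case value P(x) = g(0) by interpolation, provided the field is larger than the degree; making this work over the hypercube, and over sparse Erdos-Renyi hypergraphs, is exactly the obstacle the talk describes.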