Welcome everyone to the last session of the workshop. This talk will be by Suhail, who is a graduate student at TIFR, supervised by Arkadev. I was told that he did his undergraduate at IIT Guwahati, and that he reminds people of Arnold Schwarzenegger, so I guess you will see some Arnold Schwarzenegger jokes during his talk. He has done some recent groundbreaking work on separating communication and approximate rank, and today he's going to talk about something related.

Thanks, Makrand. The Arnold Schwarzenegger thing, you can contact me offline for that; I will tell you later. OK, so my talk is titled "Lifting with XOR". It's not yet clear what my talk is about; that will be clearer on the next slide. But as you said, I'm from TIFR Mumbai, where I'm doing my PhD under Arkadev Chattopadhyay, and this talk is based on work that I did with my advisor and with my colleague Nikhil Mande, who is now at Georgetown University.

In this talk, we're going to be talking about communication complexity for its own sake. Its applications have been on display since the morning, and I think by now it's a somewhat honorable goal to just analyze communication complexity. I'll be focusing on getting a better understanding of communication complexity in the simplest setting: two parties, just Alice and Bob, computing a total Boolean function. It's not a search problem; it's just a Boolean function, and it's total. The way XOR functions come into the talk is that they are really structured, and some aspects of XOR functions lift very easily. Yet they are a very rich class of functions, there are a lot of unknowns in the world of XOR functions, and there are very interesting open problems; I'll get to some of them in this talk. So I'll start off by introducing some conjectures in communication complexity that will be relevant. After that, I'll switch over to XOR functions and what lifting theorems look like for XOR functions, and eventually I'll tie the two together.

So let's start with a communication protocol. This is what a deterministic communication protocol looks like: it's a tree. We've seen this many times today, but let's do the dance one more time. When Alice gets an input x and Bob gets an input y, you look at the protocol. Here it says this is an Alice node; she has some function to compute at this node which tells her what to send, and that function depends only on her input x. So the pair (x, y) follows a path down the tree to a leaf, and you accept (x, y) if it reaches a leaf labeled 1. What kind of inputs reach a 1-leaf? Take this leaf here: it's reached by those x's on which Alice answers 0 at this node and 1 at this node, and those y's on which Bob answers 1 at this node and 0 at this node. So it's a product set: any x satisfying the first condition together with any y satisfying the second will reach this leaf.

Now let's look at the function computed by this protocol. We can view its truth table as a matrix: the rows are indexed by Alice's inputs, the columns by Bob's inputs, and the entry at location (x, y) is the value f(x, y). We can build up this truth table from the protocol: if you take that leaf L we saw before, there's some product set of inputs reaching it, and all those inputs get output 1 according to the protocol.
Summing up over the at most 2^c leaves, where c is the communication cost, we see that this matrix has rank at most 2^c. It's important here that all these rectangles are disjoint; otherwise you couldn't just add them up to conclude that the rank is small. Every rectangle is a rank-one matrix, and this gives us our first lower bound: if you have a small-cost protocol, the rank of the matrix must be small. So if you have a function whose matrix has large rank, it must have large communication cost, and that gives us the lower bound D(f) >= log rank(f).

The most fundamental conjecture that follows from this asks: is this lower bound tight? Is it the case that for all Boolean functions f, D(f) is at most log rank(f) raised to some fixed power alpha, so the bound is polynomially tight? This is the log rank conjecture, dating back to 1988. It connects a communication complexity measure with an algebraic measure, and such connections have been established and used, in lifting theorems for instance, quite often before, where they've been very useful. It would be very useful if this one were actually true. The thing is, we don't really know whether it's true at all. Our best upper bound is that the deterministic complexity is at most about the square root of the rank, so there's a huge gap between log rank and square root of rank. And the best separation we have says that alpha must be at least two, a result by Göös, Pitassi, and Watson, quite recently actually.

What we do know is that if you restrict the rank decomposition to be non-negative, so that we're summing up rectangles, which are all non-negative rank-one matrices, and you look at the non-negative rank of the matrix, then this conjecture is actually true, in fact with alpha equal to two: D(f) is at most log squared of the non-negative rank.

So that's the most famous conjecture. There is an analogous conjecture for randomized communication, so let's spend a few minutes on that. In randomized communication, Alice gets a string of random bits and Bob gets a string of random bits, independently. Again, the same thing happens: the probability that (x, y) is accepted is the probability that (x, y) reaches a 1-leaf. The node functions now depend on these random bits, so (x, y) reaches a given leaf L only with some probability, but we still have a nice product structure: the probability of reaching leaf L is the probability that x answers red times the probability that y answers blue. So if you look at the communication matrix, well, at least the probability-of-acceptance matrix has small rank: the probability of reaching a specific leaf L is a rank-one matrix by this product property, and so the probability that your input is accepted is a matrix of rank at most 2^c. Although you're no longer computing the truth table of your function exactly, you're computing something very close to it, the probability of acceptance, and if your protocol is correct, that's very close to the actual truth table. We know the matrix on the right has small rank, and this is pretty much the definition of saying that the matrix on the left has small approximate rank: there is a matrix close to it which has small rank. So this matrix has small approximate rank, and that gives us our randomized lower bound: the randomized communication complexity is at least the log of the approximate rank.
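To make the rank bound concrete, here is a minimal numerical illustration (my own, not from the talk): the equality function on n bits has the identity matrix as its communication matrix, so the bound D(f) >= log rank(f) already forces n bits of deterministic communication.

```python
import numpy as np

n = 4  # bits per player; small enough to write down the whole matrix

# Communication matrix of EQUALITY on n bits: M[x][y] = 1 iff x == y,
# which is just the 2^n x 2^n identity matrix.
M = np.eye(2 ** n, dtype=int)

# rank(M) = 2^n, so the rank lower bound gives D(EQ) >= log2(2^n) = n.
print(np.linalg.matrix_rank(M))  # -> 16
```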
Please feel free to stop me if you want me to go slower, or if you have any questions or clarifications. So this gives us the next conjecture, the log approximate rank conjecture. It's exactly analogous to the deterministic conjecture, except it also has an interesting history. It appears in the literature in 2007, in a book by Lee and Shraibman, but it actually appears somewhere in 2005: this is the Wikipedia page for communication complexity, and two years before the conjecture appeared in the literature, it was there on the Wikipedia page. I don't know how that happens, but yeah. I should credit Forge God, the user who made the edit; I don't know who that is.

So what is the nature of this conjecture? It's interesting because it implies the log rank conjecture, so if you prove it, you get that awesome result too. We know that beta is at least two because of set disjointness, and we actually know that it's at least four because of another recent work. As far as upper bounds go, order approximate rank itself is the best upper bound on randomized complexity that we know of right now; we don't even know a square root bound like we had in the deterministic case.

OK, so those are two conjectures. Now let's look a bit at non-negative rank. As I mentioned earlier, the conjecture with non-negative rank is actually true in the deterministic setting; this is a well-known result. You could conjecture that the same holds with approximate non-negative rank: a randomized protocol still gives a combination of non-negative rank-one matrices, so maybe approximate non-negative rank is tight. There's actually a more reasonable conjecture here, because the lower bound that we have is really the maximum of the approximate non-negative ranks of F and of its complement, and we don't even know whether those two are related. So it makes more sense to conjecture that variant, and in fact these people did conjecture that variant.

So we've got three conjectures; let's just cache this knowledge for now. This is the communication complexity portion of the talk. We have the log approximate rank conjecture, which implies the log rank conjecture, and we have the non-negative variants: that R(F) is at most a fixed power of the log of the approximate non-negative rank, and the more reasonable variant with the complement.

Now let's move to XOR compositions. What is an XOR composition? If you have an n-bit string, let's call such strings z, then composing with XOR means coming up with two n-bit strings x and y such that their bitwise XOR is z. Intuitively this feels really nice, because it encodes the n-bit string z into two n-bit strings, and if you look at either string by itself, x or y, you get no information about z, which is what you want from a good composition in order to have a lifting theorem. The fact that XOR is good for this purpose was known back in 1919 and was actually patented in some form, but the patent has expired, so we will continue to use it. So this is the formal definition of an XOR function: F(x, y) = f(x XOR y). As I said, neither Alice nor Bob alone has any idea about any bit of z. If they want to compute f composed with XOR, one thing they can do is take a decision tree for f: whenever it wants the i-th bit, they communicate x_i and y_i to learn z_i, and continue down the tree.
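As a toy illustration of that simulation (my own sketch, not the talk's; the tree encoding is made up): each decision tree query to z_i costs two bits of communication, one from each player, so the deterministic communication complexity of f composed with XOR is at most twice the decision tree depth of f.

```python
# Hypothetical sketch: simulating a decision tree for f on z = x XOR y.
# Nodes are ('query', i, left_child, right_child) or ('leaf', value).

def simulate(tree, x, y):
    """Walk the tree; each query to z_i costs 2 bits of communication."""
    bits_sent = 0
    node = tree
    while node[0] == 'query':
        _, i, left, right = node
        z_i = x[i] ^ y[i]   # Alice sends x_i, Bob sends y_i
        bits_sent += 2
        node = left if z_i == 0 else right
    return node[1], bits_sent

# Example: a depth-2 tree computing z_0 AND z_1.
tree = ('query', 0,
        ('leaf', 0),
        ('query', 1, ('leaf', 0), ('leaf', 1)))
print(simulate(tree, [1, 1], [0, 0]))  # z = (1, 1), so output 1 using 4 bits
```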
But XOR functions can do much more. Parity is really hard for decision trees, but it's trivial if you're given it as an XOR function: to find the parity of z, Alice computes the parity of x and sends it to Bob, Bob XORs it with the parity of y, and that's the answer. So what we expect to lift from is not decision trees but the model called parity decision trees, where the allowed queries are arbitrary parities.

This is what a parity decision tree looks like. There's a hidden input, and each node has a subset of [n] associated with it. To go down the tree, you look at the subset at the root and query the parity of those bits of z. If that parity is zero you go left; if it's one you go right; and so on until you reach a leaf, whose label you output. It's exactly what you would expect. And which inputs reach this leaf? Exactly the inputs satisfying the constraints along the path, and each constraint is a linear constraint over F2, a parity constraint. So the set of inputs reaching a leaf forms an affine subspace of F2^n.

PDTs can have very interesting properties; here is one. Consider the OR function. It's hard for deterministic PDTs: you have to reject exactly one input, the all-zeros input, and accept everything else. Any leaf at depth d contains a 2^{-d} fraction of all inputs, because each constraint you add halves the set, so a leaf that captures just a single input has to be at depth n. But for randomized PDTs, OR is really easy, in fact computable with a constant number of queries: randomly sample a subset and query the parity on that subset. If your string is the all-zeros string, you always get zero; if it's not, it's easy to see that you get one with probability one half.

So if you lift this with XOR, you'd expect the deterministic communication complexity to be Omega(n) and the randomized communication complexity to be O(1), and that's exactly what happens: OR composed with XOR is (the complement of) the equality function, and it separates randomized from deterministic communication complexity. As far as I know, no other gadget gives composed functions with such a separation, and there's a reason: if you're lifting query complexity instead of parity decision tree complexity, query complexity doesn't show a separation between randomized and deterministic, so you wouldn't expect it to lift to a communication separation. These kinds of properties of XOR functions are the backbone of this talk. They're what I find fascinating, and they're why I like studying XOR functions: they have these weird properties, and every further one we find translates into communication complexity. So let's see what else we can get. We saw that a randomized parity decision tree can compute OR, so it can similarly compute AND, and in general it can decide membership in affine subspaces; a sketch of the underlying random-parity test follows.
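A minimal sketch of that random-parity test (my own toy code, not from the slides): deciding whether x is the all-zeros string, with one-sided error, which is the heart of both the OR algorithm and the affine subspace test.

```python
import random

def parity_query(x, S):
    """One PDT query: the parity of the bits of x indexed by S."""
    return sum(x[i] for i in S) % 2

def is_all_zeros(x, rounds=20):
    """Randomized PDT for the complement of OR: always accepts x = 0^n,
    rejects any other x except with probability 2^-rounds."""
    n = len(x)
    for _ in range(rounds):
        S = [i for i in range(n) if random.random() < 0.5]
        if parity_query(x, S) == 1:
            return False  # a witness parity: x is certainly nonzero
    return True

# Membership in an affine subspace {x : Ax = b over F2} reduces to the same
# test: take a random subset of the constraints, query the corresponding
# combined parity of x, and compare it with the combined right-hand side.
print(is_all_zeros([0, 0, 0, 0]), is_all_zeros([0, 1, 0, 0]))
```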
That means if you give me some affine subspace S, and the question is whether x lies in S, then a randomized parity decision tree can answer it with very few queries; it's just an adaptation of the OR algorithm. So say you're given some PDT and a specific node in it, and the question is: given an input x, will it reach this node? That is again an affine subspace query, so an RPDT can do it very easily. And this actually gives RPDTs the power to balance PDTs. If I have a PDT which is lopsided, so its depth is really large but it doesn't have many leaves, then I can use randomized queries to compute the same function with a balanced RPDT. So the log of the leaf complexity of a PDT is an upper bound on the depth of the randomized parity decision tree. And it's an open question whether randomization helps any more than this in parity decision trees. That is, is there any function f for which RPDT(f) is much smaller than the log of the number of leaves of the best parity decision tree? This is one of the open questions I wanted to highlight.

If it turns out that there's nothing else RPDTs can do, we would expect this to lift to the following statement in communication complexity: is there an XOR function which is easy for randomized communication but hard for deterministic communication even when equipped with equality queries, that is, for P^EQ protocols, and for the real communication model we saw earlier? I find this to be a quite interesting question. Recently it was closed for non-XOR functions: Chattopadhyay, Lovett, and Vinyals showed that there is a function which is easy for randomized communication but hard for P^EQ protocols, but it wasn't an XOR function. So it's still an interesting question whether composed functions can show this kind of separation, or whether all composed functions are so structured that their randomized protocols are essentially deterministic protocols with oracle access to equality.

Question from the audience: this balancing upper bound on RPDTs, is it even a non-adaptive upper bound? Does the upper bound hold for non-adaptive randomized parity decision trees? I don't see how that would follow; I don't think it should hold. It's one more reason to understand PDTs better and find out.

So I mentioned that you can lift with XOR. To understand what lifts with XOR, we have to understand how to represent a function when working with parity decision trees. How many people here are familiar with Fourier analysis over the Boolean cube? All right, cool. OK, so I'll just do a quick run-through. Let's take, for example, the AND function. Here I've written it with inputs in plus-minus one instead of zero-one, and the reason is that we want to deal with parities, and parities are very easy to deal with in the plus-minus-one world: the parity of z_1 and z_2 is just the product z_1 times z_2, since whether the number of minus ones is even or odd determines the answer. Fourier analysis tells us that any such function has a unique representation as a multilinear polynomial. And there are two measures of this polynomial that we care about. One is the sparsity of the function, the number of non-zero coefficients; in this case, the sparsity of this three-bit AND is eight.
The other measure is the L1 norm, the sum of the absolute values of the coefficients. In the case of AND they all have the same absolute value, so it's just eight times one-eighth: the L1 norm is one. In ordinary decision trees, degree and approximate degree play the major roles; here, the best measures we have are things like sparsity and L1 norm.

We knew that every leaf of a parity decision tree is an affine subspace, so we can add up the leaves. If you have a depth-k PDT, each leaf is an affine subspace with k constraints, which is like an AND of k variables, so its indicator has sparsity at most 2^k, and you're adding up at most 2^k leaves. So the sparsity of the computed function is at most 2^{2k}, and for similar reasons its L1 norm is at most 2^k.

What if the function is computable by a depth-k RPDT? An RPDT just tosses some random coins first and then chooses a parity decision tree, so it's a convex combination of PDTs: you can view it as a distribution over many PDTs, where this one is chosen with probability p_1, that one with probability p_2, and so on. The probability that z is accepted equals p_1 times PDT_1(z) plus p_2 times PDT_2(z), and so on. Each of these has L1 norm at most 2^k, so the whole combination also has L1 norm at most 2^k. And it approximates the function we want to compute, because the acceptance probability approximates the function. Is that clear? Cool, so again we get a lower bound from approximation: the approximate L1 norm is at most 2^k. The approximate sparsity is not yet under control, because written this way the polynomial can have a lot of terms; it depends on how many parity decision trees are in the support, and that could be huge. So we don't yet have an upper bound on approximate sparsity in terms of the RPDT depth. By "not yet" I mean not from this slide; it will come soon.

With these measures in hand, let's see how things lift. Lifting is really nice with XOR because, for example, the sparsity of f is exactly the rank of the matrix of f composed with XOR. There's no calculation we need, no complex analysis; it's very simple. The approximate sparsity, which is just the minimum sparsity among all nearby real-valued functions, also lifts to approximate rank, with a slight loss here, and that loss is needed; OK, I just hadn't mentioned that. What this implies is that if you prove the log rank conjecture in the PDT world, namely that PDT complexity is at most polynomial in the log of the sparsity, then because of this lifting you get that D(F) is at most polynomial in log rank(F), since the sparsity of little f is the rank of capital F. And similarly, if the analogue of the log approximate rank conjecture is true in the PDT world, then it's also true in the communication world.

There's more nice lifting happening here: the L1 norm of f is exactly the same as the L1 norm of f composed with XOR. Now it might seem a bit odd to talk about the L1 norm of f composed with XOR, but that is not a mistake. A reviewer once corrected us, saying, you probably don't mean that, but it's exactly what we meant, and the reason is that there's a conjecture here saying that randomized communication complexity is at most polylog in the L1 norm. So the L1 norm also lifts.
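Before moving on, here's a tiny brute-force check (my own, purely illustrative) of the two facts above for the three-bit AND: its Fourier sparsity is 8 and its L1 norm is 1, and its sparsity exactly equals the rank of the lifted matrix; in the plus-minus-one world, XOR of inputs becomes a coordinatewise product.

```python
import itertools
import numpy as np

n = 3
cube = list(itertools.product([1, -1], repeat=n))  # +-1 inputs, +1 = "true"

def AND(z):
    """0/1-valued AND of +-1 inputs: 1 iff every coordinate is +1."""
    return int(all(zi == 1 for zi in z))

def chi(S, z):
    """Fourier character: the product of z_i over i in S."""
    p = 1
    for i in S:
        p *= z[i]
    return p

subsets = [S for r in range(n + 1) for S in itertools.combinations(range(n), r)]
fhat = {S: sum(AND(z) * chi(S, z) for z in cube) / 2 ** n for S in subsets}

sparsity = sum(1 for c in fhat.values() if abs(c) > 1e-9)  # 8 coefficients
l1_norm = sum(abs(c) for c in fhat.values())               # 8 * (1/8) = 1

# Lifted matrix M[x][y] = AND(x XOR y); XOR is coordinatewise product here.
M = np.array([[AND(tuple(a * b for a, b in zip(x, y))) for y in cube]
              for x in cube])
print(sparsity, l1_norm, np.linalg.matrix_rank(M))  # -> 8 1.0 8
```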
The approximate L1 norm also lifts exactly. So the L1 norm is really nice; that was supposed to be the justification for why talking about the L1 norm of the composed function makes sense. Now, we don't have the other direction of this conjecture. The correct measure to use here is probably something called the trace norm, which is pretty much equivalent, and the lower bound does hold there: randomized complexity is at least a bound in terms of the trace norm. But back when this was written, Grolmusz had good reason to consider it: in the multi-party setting, he showed that if you have enough players, this conjecture is true, and he didn't find any counterexample in the two-player case. So it was a fair conjecture back then.

And probably the best of all the lifting theorems we have, definitely the best we have for XOR functions, is the result by Hatami, Hosseini, and Lovett, which says that deterministic PDT complexity does lift to the deterministic communication complexity of the composed function. The takeaway isn't these exact statements; it's that a lot of measures lift. You don't have to remember the statements, I just want the list to be complete. There are other measures that lift too; for instance, weakly unbounded randomized complexity also lifts from RPDTs to randomized communication, but that's a bit esoteric.

What this last theorem means is the following: since sparsity is exactly rank, and PDT complexity is polynomially related to D for composed functions, the conjecture that the log rank conjecture holds for XOR functions is exactly the same as the conjecture that PDT(f) is polynomial in the log of the sparsity.

Now, in this talk I want to get to approximate rank, and so far I've gotten from RPDTs to the approximate spectral norm. So all I have to do is connect that to approximate sparsity, and then I can get to approximate rank. This next slide just connects the approximate L1 norm and approximate sparsity, and it turns out they are the same thing up to a log n factor: up to an additive log n, the log of the approximate sparsity and the log of the approximate spectral norm agree.

The reason is quite straightforward. Take a function g and write it as a polynomial, where z^S is the monomial multiplying all z_i for i in S, and its coefficient is g-hat of S. What I want to do is sample from this polynomial: I sample a monomial with probability proportional to the absolute value of its coefficient. For example, here we've sampled one monomial and normalized it. This sampled polynomial is, in expectation, exactly our polynomial, because we're sampling coefficients from the natural distribution. There are some details here, but I'll rush through this slide. As you sample more and more monomials and take their average, the expectation stays g(z), so we expect the average to be concentrated around g(z), by something like a Chernoff bound, and that's exactly what happens: if you take the number of samples t large enough, namely of the order of the spectral norm squared times n, then the sampled polynomial approximates g on all inputs. And the sampled polynomial has small sparsity: its sparsity is at most the number of samples.
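Here is a toy version of that sampling argument (my own sketch; the dense polynomial g is a made-up example with L1 norm one): sample monomials with probability proportional to the coefficient magnitudes, average the signed, L1-scaled characters, and check the error empirically.

```python
import itertools
import random

n = 16
# A made-up dense polynomial g with small L1 norm: every subset S gets a
# random sign and weight 1/2^n, so g has 2^n monomials but L1 norm 1.
subsets = [S for r in range(n + 1) for S in itertools.combinations(range(n), r)]
ghat = {S: random.choice([-1, 1]) / 2 ** n for S in subsets}
L1 = sum(abs(c) for c in ghat.values())

def chi(S, z):  # Fourier character over +-1 inputs
    p = 1
    for i in S:
        p *= z[i]
    return p

def g(z):
    return sum(c * chi(S, z) for S, c in ghat.items())

# Sample t monomials S with Pr[S] = |ghat(S)| / L1; each sample contributes
# sign(ghat(S)) * L1 * chi_S, so the average is an unbiased estimator of g.
t = 4000  # should be of the order of L1^2 * n / eps^2
samples = random.choices(subsets, weights=[abs(ghat[S]) for S in subsets], k=t)

def g_sampled(z):
    return L1 * sum((1 if ghat[S] > 0 else -1) * chi(S, z)
                    for S in samples) / t

# Spot-check the approximation on a few random inputs.
tests = [tuple(random.choice([1, -1]) for _ in range(n)) for _ in range(20)]
err = max(abs(g(z) - g_sampled(z)) for z in tests)
print("sparsity <=", len(set(samples)), "  max error seen:", round(err, 3))
```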
So we now have the result that the approximate sparsity of g is controlled by the spectral norm of g, and therefore the approximate sparsity of f is controlled by the approximate spectral norm of f. It's a bit of a messy slide, so I won't keep it up for long, but what's important is that the log approximate rank conjecture for XOR functions is equivalent to the log approximate L1 norm conjecture for XOR functions. And this implies Grolmusz's conjecture: if randomized communication complexity is always polynomially related to the log approximate rank, and therefore always polynomially related to the log approximate L1 norm, then it's certainly not larger than polylog of the exact L1 norm, because the approximate L1 norm can only be smaller. So the log approximate rank conjecture implies Grolmusz's conjecture, actually even for non-XOR functions.

Let's get back to the familiar world of communication complexity. We had these conjectures already, with the log rank conjecture for XOR functions. We're going to add Grolmusz's conjecture, which is implied by the log approximate rank conjecture and apparently also implies the log rank conjecture. And just for fun, we'll add one more conjecture, which I'm not going to talk about. This slide is really there to show you that this field is rich: there are a lot of interconnections between various measures, and these are the statements.

So we've covered the communication complexity conjectures I wanted to talk about and a basic intro to parity decision trees. Now let's analyze the PDT counterpart of this conjecture over here, Grolmusz's conjecture. This is actually the route we took when we came up with the counterexample to the log approximate rank conjecture. It's never been presented this way before, because I never had the time; the talks were always shorter. This time I have enough time to go through it the way we actually did it.

So we were analyzing Grolmusz's conjecture. Back then we didn't even know that it was implied by the log approximate rank conjecture; it seemed interesting in itself. Why would R(F) be related to the spectral norm of F? So we analyzed the corresponding PDT question: why would a function with small Fourier L1 norm be easy to compute with a randomized parity decision tree? The easiest example of a function with small Fourier L1 norm is an AND; it has L1 norm one. More generally, indicators of affine subspaces have L1 norm one. And these are easy for randomized PDTs, as we saw, so they won't give a counterexample. We thought the next simplest example would be a sum of a few disjoint ANDs, disjoint in the same way that the affine subspaces at the leaves of a tree were disjoint. If they're all disjoint, we can take their sum and still keep our bound on the L1 norm of the function. For example, take S1, S2, and S3 to be three different ANDs. S1 says z_1 equals zero and z_2 equals zero; it doesn't care about z_3. These are subcubes; we've seen those before, since we're at a lifting theorems workshop. And all of them are disjoint: if something were in both S1 and S2, then z_1 would be both zero and one, and these are bits, not qubits. So let's generalize this example to a larger setting with m different subcubes.
If I want to make them all disjoint, I can use exactly the trick from the last slide. For any two subcubes, say S1 and S2, I have a coordinate z_{12} which is set to zero in S1 but to one in S2, which makes S1 and S2 disjoint. In general, for any S_i and S_j there is a coordinate z_{ij} making sure that S_i and S_j are disjoint.

This example can be rephrased in a much prettier way, as the SINK function. The SINK function has a nice description. It's on m choose 2 bits, exactly the number of bits we needed in this example. Think of a complete graph on m vertices with the variables placed on the edges: z_{12} is put on the edge between v_1 and v_2, and in general z_{ij} between v_i and v_j. Assigning values to these m choose 2 variables orients the edges: if the variable on an edge is set to zero, we interpret that as the edge pointing from the larger vertex to the smaller one, and if it's one, as pointing from the smaller vertex to the larger one. So an assignment gives a complete orientation of the edges, and the question we ask is: is there a vertex in this graph which is a sink?

SINK has been studied before, but never from this angle. Its query complexity was studied for, I'm forgetting the name of the conjecture right now... yes, the Aanderaa-Karp-Rosenberg conjecture. Some people have also studied it because it's like finding a Condorcet winner: you're holding elections between these people and you want to know whether somebody has won all their matchups. Some people have told me that SINK is a very pessimistic name for this, and that I should be calling it the winner function or something like that, but as we'll see, SINK is a pretty good name to keep; we'll see why later on.

So the definition is: if there's a sink in the graph, output one; if there's no sink, output zero. And we know, because this is the same function we saw earlier, that it's a sum of m subcube indicators: the subcube where v_1 is a sink, the subcube where v_2 is a sink, and so on. They're all disjoint, because v_i and v_j cannot both be sinks; there's an edge between them. So the spectral norm of SINK is at most m, and by the equivalence between approximate spectral norm and approximate sparsity, the approximate sparsity of SINK is at most m^4. Taking logs, both are just O(log m). Then you lift with XOR, and these measures simply lift because XOR is so nice: for SINK composed with XOR, the L1 norm is at most m and the approximate rank is at most m^4.

One further measure, which doesn't directly lift from the PDT world, is the approximate non-negative rank, and it is small too. Why? It's quite simple. SINK(x XOR y) is the sum over i of SINK_{v_i}(x, y), where SINK_{v_i} asks: is v_i a sink? And finding out whether v_i is a sink is equivalent to an equality check. For example, look at v_1: v_1 is a sink if and only if Alice's bits on these edges are equal to Bob's bits on those edges, because then their bitwise XOR is all zeros and v_1 is a sink.
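Here's a small executable description of the function (my own code, following the orientation convention above, with a variable z[(i, j)] for each pair i < j):

```python
import itertools

def sink(m, z):
    """SINK on m-choose-2 bits. z[(i, j)] for i < j: value 0 means the edge
    points from the larger vertex j to the smaller vertex i, value 1 means
    it points from i to j. Output 1 iff some vertex has every incident
    edge pointing into it."""
    for v in range(m):
        if all(z[(i, v)] == 1 for i in range(v)) and \
           all(z[(v, j)] == 0 for j in range(v + 1, m)):
            return 1  # v is a sink (necessarily the unique one)
    return 0

# Sanity check for m = 3: each of the 3 disjoint subcubes "v_i is a sink"
# fixes 2 of the 3 edge bits, so it contains 2 of the 8 inputs; 6 in total.
edges = [(0, 1), (0, 2), (1, 2)]
count = sum(sink(3, dict(zip(edges, bits)))
            for bits in itertools.product([0, 1], repeat=3))
print(count)  # -> 6
```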
Similarly, for every vertex, checking whether it is a sink is easy via a randomized communication protocol. So each of these matrices comes from a protocol of small randomized communication complexity, and hence each has small approximate non-negative rank, and since you're just summing them up, the whole thing also has small approximate non-negative rank. That m^{O(1)} is actually just m^5.

All right, so what we proved in our recent result is that the randomized PDT complexity of SINK is Omega(m), that the randomized communication complexity of SINK composed with XOR is Omega(m), and that the parity kill number is also Omega(m), but I haven't introduced that measure, so never mind that one. What does this do to our conjectures? They are all now false. The log approximate rank conjecture is false because the log approximate rank is O(log m) while the randomized communication complexity is Omega(m). Similarly, Grolmusz's conjecture, the parity kill number conjecture, and the strong log approximate non-negative rank conjecture are also false.

The next thing to check is the general, more reasonable form of the approximate non-negative rank conjecture. There we weren't as lucky, because the lower bound method we used actually shows that the complement of SINK composed with XOR has large approximate non-negative rank; that is how we show that SINK composed with XOR is hard for randomized communication. So the lower bound you would get from that conjecture is Omega(m), which is tight, and that conjecture still stands. But SINK actually sinks one more conjecture, and I couldn't have said "winner wins one more conjecture", so SINK is a good name after all. That's the quantum log rank conjecture: the log approximate rank is also a lower bound for quantum communication complexity, and that conjecture asks whether it's polynomially tight. Soon after our work, two papers went ahead and disproved this conjecture as well; Makrand is one of the authors. So those were the sunken conjectures.

Let's talk a bit more about this remaining conjecture, something I am very curious about, because I feel there might be an avenue to go further and disprove it; let me show you what I mean. What we did was come up with a function whose one-inputs form a disjoint union of m subcubes, but whose zero side has large approximate non-negative rank. If we could come up with a function where F-inverse of one is a disjoint union of few subcubes and F-inverse of zero is also a disjoint union of few subcubes, while keeping the RPDT complexity large, then you would have good reason to believe that the general log approximate non-negative rank conjecture is also false. But it turns out you can't do this with unions of subcubes; there's an elegant proof that follows from work of Ehrenfeucht and Haussler. The corresponding question when you don't limit yourself to subcubes is open, though. Remember, subcubes weren't the only things with small spectral norm; affine subspaces in general have small spectral norm. So if you try disjoint unions of affine subspaces, you might actually be able to find such an example: the proof that you can't do it with subcubes does not carry over to affine subspaces. I think it would be very interesting to find out whether this is possible or not.
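Before summarizing, one loose end from earlier: the constant-cost equality test that makes each "is v_i a sink?" check cheap for randomized communication. A hedged sketch of the standard public-coin test via random parities (mine, not from the talk):

```python
import random

def eq_test(x, y, rounds=20, seed=42):
    """Public-coin randomized equality test. Per round, Alice sends the
    parity of her bits on a shared random subset (1 bit); Bob compares it
    with his own parity on the same subset. Always accepts x == y; rejects
    x != y except with probability 2^-rounds."""
    rng = random.Random(seed)  # stands in for shared public randomness
    n = len(x)
    for _ in range(rounds):
        S = [i for i in range(n) if rng.random() < 0.5]
        if sum(x[i] for i in S) % 2 != sum(y[i] for i in S) % 2:
            return False
    return True

print(eq_test([0, 1, 1], [0, 1, 1]), eq_test([0, 1, 1], [1, 1, 1]))
```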
So, to summarize what I wanted to say: XOR functions behave well with respect to several measures that lift very easily; PDTs are not well understood, and there's a playground there we can keep exploring; and there are many juicy questions. One is the question I mentioned at the beginning: are randomized PDTs the same as what I've written there as conjunction PDTs? The relevant measure is the leaf complexity of a PDT. If you have a PDT with small leaf complexity, you can balance it using affine subspace queries, as I said, so you don't actually need to go all the way to RPDTs; balancing is exactly captured by this other model where you can query affine subspaces, that is, conjunctions of parities. The question is whether these two models, RPDTs and conjunction PDTs, are the same, given that those two are. And if you think parities are really messy to work with, there's a version of the question that doesn't even involve parities: are conjunction decision trees the same as randomized conjunction decision trees? Here a query is a single bit or a conjunction of bits, and the question is whether randomness adds any power. We know that without the conjunctions, randomness doesn't add power, and as far as I know, the conjunction version is open.

Another open problem: can we close the avenue I mentioned on the previous slide? Can we tighten the relation between randomized complexity and approximate rank? We showed that the log approximate rank and the randomized complexity can be exponentially far apart, and in doing so we showed that SINK's randomized complexity is within a fourth power of its approximate rank. We'd like to know whether you can bring these even closer, because the best upper bound we have is order approximate rank itself.

Another very interesting question: can we attack the log rank conjecture using XOR functions somehow? The way we used SINK was to write it as a sum of problems with small randomized complexity, right? That meant each summand has small approximate rank, so the sum also has small approximate rank, but the randomized complexity of the sum increased a lot, and that's what gave the separation. Now suppose you replace the summands with functions that have small deterministic complexity, hence small exact rank, so that the sum also has small exact rank. Can the sum's deterministic complexity be much larger? In this case, the answer is no: if a function is a sum of pieces with small deterministic communication complexity, then the function itself has small deterministic communication complexity. So the log rank conjecture is a Titanic conjecture, and it makes sense that it's not sinkable. There are other questions too; I haven't introduced the necessary measures, but many more questions come up when you look at PDTs and wonder about their strange properties. And with that, I think I'll open the floor.

Question from the audience: can you use the word "Titanic" in this context? It suggests you actually think the conjecture is false. Oh, thank you, that's a good question. No spoilers: I don't know what happened to the Titanic. I actually think it could be true, but that's what I thought about the log approximate rank conjecture as well. After we had disproved it, we kept going through our proofs wondering where we went wrong, until we were convinced that no, it really was false.
We're still at that stage with the log rank conjecture. We don't know whether it's...