Yes, so this talk is on a topic called unambiguous catalytic computation. Suppose you want to perform some computation, but the space available to you is not sufficient, and somebody helps you by providing some extra space. There are two catches. First, this extra space is already filled with some content, something arbitrary and incompressible. Second, although you are allowed to use this extra space for your computation, you can write on it and modify its content, at the end, when you are done, you must restore its initial content. In such a case, a question one can ask is: can we use this extra space somehow to perform a computation that we were not able to perform earlier, and if not, can we prove that this extra space is useless?

To study this question formally we have a model called the catalytic Turing machine. What is a catalytic Turing machine? It is just like a normal Turing machine, except that apart from the input tape and the work tape you also have an extra tape called the auxiliary tape. Initially this auxiliary tape can contain anything; it need not hold the blank symbols we usually assume in a Turing machine. In the middle of the computation the machine can read from and write to the auxiliary tape freely, but when the machine halts, the initial content of the auxiliary tape must be back in place. I guess this explains the name "catalytic": the auxiliary tape helps us perform our computation and then returns to its original state, just as a catalyst does in a chemical reaction.

The class L denotes the usual logspace class: the set of all languages decidable in O(log n) free space by a normal Turing machine. The corresponding class in the catalytic setting is CL: the set of all languages decidable by a catalytic Turing machine with O(log n) work space and polynomial-size auxiliary space. Here we have taken the size of the auxiliary space to be some n^c. This makes sense, because with anything larger you would not be able to store the address of an auxiliary-tape cell in your work tape, and addressing its own tapes is a pretty reasonable thing for a Turing machine to be able to do.

So the question we asked on the very first slide, whether the extra memory is useful or not, becomes in this logspace setting the question of whether L equals CL. Intuitively one may feel that the extra memory is useless: since you have to restore the content at the end, at every point of the computation you should remember it in some form, and since it is also incompressible, you are effectively working only with your free work space. But the lower bound we know for the class CL indicates that this is probably not true. The lower bound and the upper bound were both proved in the same paper in which the model was defined: the lower bound is TC^1, and the upper bound is ZPP, the class of languages decidable in expected polynomial time with zero error. Why does this lower bound indicate that L is probably not equal to CL? The reason is a long-standing belief about some other classes. TC^1 is known to contain NL, nondeterministic logspace, and NL contains L. So if someone now proves that L equals CL, that would imply L = NL = TC^1, and it is a long-standing belief of the community that L is not equal to TC^1. This is why we now also believe that L is not equal to CL.
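[Editor's note: to make the restoration rule concrete, here is a minimal Python toy. It is my own illustration of the rules, not of the model's actual power: the routine keeps only a small private counter, scribbles on the borrowed buffer via self-inverse XOR writes, and hands the buffer back unchanged.]

```python
import os

def count_ones_catalytically(x, aux):
    """Toy illustration of the catalytic rules: we may modify `aux`
    (here with XOR writes, which are their own inverse) and keep only
    O(log n) private state, but `aux` must be restored before returning."""
    result = 0                      # the small "free" work space
    for i, bit in enumerate(x):
        aux[i] ^= bit               # borrow the cell; XOR is reversible
        result += bit
    for i, bit in enumerate(x):
        aux[i] ^= bit               # undo every write, restoring aux
    return result

aux = bytearray(os.urandom(8))      # arbitrary, incompressible content
before = bytes(aux)
assert count_ones_catalytically([1, 0, 1, 1, 0, 1, 0, 0], aux) == 4
assert bytes(aux) == before         # initial content is restored
```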
So this was the deterministic catalytic computation model. We also have nondeterministic catalytic computation. A nondeterministic catalytic Turing machine restores the auxiliary content along all possible sequences of nondeterministic choices, and the rest remains the same: if the input belongs to the language it is accepted along at least one path, and if it does not, it is not accepted along any path. NL is the well-known class nondeterministic logspace. The corresponding class in the catalytic setting is CNL: the set of languages decidable by a nondeterministic catalytic Turing machine with O(log n) free space and n^c auxiliary space.

For CNL we know two results. We know the upper bound: CNL is a subset of ZPP, the same as the upper bound for CL. And we also know that it is closed under complement, the Immerman-Szelepcsényi equivalent for the catalytic world. But this holds under an assumption: that there is some language computable in linear space which requires exponential-size circuits. The reason this assumption is needed is that the proof of CNL = co-CNL uses certain pseudorandom generators, and those pseudorandom generators exist under this assumption.

Let me quickly go over the proof that CNL is a subset of ZPP. The proof is short and easy to follow, but more importantly the observation involved in it is crucial not just for CNL being a subset of ZPP but also for CNL = co-CNL, and for what we have proved in our paper. The proof goes via the notion of a configuration graph. For a normal Turing machine you have one configuration graph with respect to one machine M and one input x. Here it is defined slightly differently: you have one configuration graph with respect to one machine M, one input x, and one initial auxiliary content A. Let G(M, x, A) denote this configuration graph. A configuration here consists of four things: the state, the positions of the heads, the work tape content, and the auxiliary tape content. One more thing: G(M, x, A) contains only those configurations which are reachable from the starting configuration, unlike the normal setting, where you take all configurations, even the unreachable ones.

Now take one starting configuration, and take a second starting configuration which differs from it. Both are starting configurations for some fixed machine M and fixed input x; the only difference is the initial auxiliary content, which is A in one and A' in the other. The claim is that no sequences of nondeterministic choices can make both of them land at the same configuration. To see why this is true, suppose they could, and take one computation path starting from this common middle configuration to a halting configuration.
Look at the resulting computation path. It runs from the first starting configuration to a halting configuration whose auxiliary content is, say, A''. Since this is a catalytic Turing machine, which restores its initial auxiliary content, A'' must equal A. But at the same time, entering the same suffix from the second starting configuration also gives a computation path, so A'' must also equal A'. It cannot equal both A and A' simultaneously, so something like this cannot happen. What this basically says is that two different configuration graphs, G(M, x, A) and G(M, x, A'), cannot share a common node: their node sets are disjoint.

This gives us a good upper bound on the sum of the sizes of the configuration graphs over all possible auxiliary contents, namely the total number of configurations possible at all. Reconstructing the formula from the slide:

sum over A of |G(M, x, A)|  ≤  2^(O(log n)) · 2^(n^c) · n · O(log n) · n^c.

The first two factors are the number of different work tape contents and the number of different auxiliary tape contents, and the last three factors are the numbers of head positions for the three tapes. If you divide both sides by 2^(n^c), this says that the average size of the configuration graph over all possible auxiliary contents is very small, just poly(n), even though for a particular auxiliary content A the configuration graph can be very large, exponential in n.

(Audience: Is this very similar to the proof that CL is contained in ZPP from the original paper?) Yes, it is. They proved CL as a subset of ZPP first, in the original paper, and then the CNL version again in their CNL paper; this is not my result, and Michal Koucký was one of the co-authors.

So what we know now is that for some particular auxiliary content A the size of the configuration graph can be very large, but on average it is very small. In other words, if you pick your auxiliary content A randomly, the expected size is poly(n). The ZPP algorithm for the language of machine M is now obvious: on input x, you randomly generate A, and then you look for the accept node in G(M, x, A) using some polynomial-time search.
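[Editor's note: a sketch of that ZPP simulation in Python. The three callables abstract the machine M and are assumed interfaces, not any real API. The answer never depends on the random choice; only the running time does, and it is polynomial in expectation because the expected size of the reachable configuration graph is poly(n).]

```python
import random
from collections import deque

def zpp_decide(start, successors, is_accepting, aux_bits):
    """Sketch of CNL ⊆ ZPP. start(A) gives the initial configuration for
    auxiliary content A, successors(c) yields the configurations reachable
    in one step, and is_accepting(c) tests for an accept configuration."""
    A = random.getrandbits(aux_bits)      # uniformly random auxiliary content
    root = start(A)
    seen, queue = {root}, deque([root])
    while queue:                          # plain BFS over reachable configs
        c = queue.popleft()
        if is_accepting(c):
            return True
        for nxt in successors(c):         # all nondeterministic moves
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False
```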
So this was the nondeterministic model. In our paper, we have studied a special type of nondeterministic computation called unambiguous computation. In unambiguous computation, your nondeterministic algorithm accepts the input along exactly one path, unlike the nondeterministic model where you accept it along at least one path; and if the input does not belong to the language, you do not accept it along any path. This type of computation is well studied with respect to normal Turing machines: the class UL, unambiguous logspace, is the set of all languages decidable by an NL machine with at most one accepting path for every input. We defined the class CUL in the catalytic setting, the corresponding class for UL, and for it we have proved that CUL is equal to CNL. This holds under the same assumption as that of CNL = co-CNL.

CUL is trivially a subset of CNL; it follows from the definition. To prove that CNL is a subset of CUL, we construct a CUL machine for a given CNL machine. This CUL machine looks for the accept node in the configuration graph of the CNL machine using two techniques. The first is Reinhardt and Allender's double counting technique, which I will explain briefly in the next two slides, and we also use some of the tools of the CNL = co-CNL result. The double counting technique was also used to prove the analogous result in the traditional setting, namely NL = UL under a similar hardness assumption.

Let me briefly explain the double counting technique. It is closely associated with min-unique graphs. What is a min-unique graph? A directed graph G is called min-unique with respect to some vertex s if there is a unique shortest path from s to every vertex reachable from s. There are two well-known applications of the double counting technique. The first is a UL algorithm for deciding whether G is min-unique with respect to some vertex s. The second is that, under the promise that G is min-unique with respect to s, you can also decide whether some vertex t is reachable from s.

Let me first define the two quantities that we actually count in this double counting technique. For a given graph G and source vertex s, let S_i denote the set of all vertices at distance at most i from s, where by distance I mean the length of a shortest path. The first quantity, c_i, is simply the cardinality of the set S_i, and the second, Σ_i, is the sum of the distances from s of all the vertices in S_i. Our UL algorithm computes these values c_i and Σ_i iteratively, starting from c_0 and Σ_0, and it computes the correct values along a unique computation path. While computing these values it also determines whether the graph is min-unique with respect to the source vertex s.

At the outset we know three things about S_0, the trivial set which contains only s: we know c_0 = 1; we know Σ_0 = 0, because the distance from s to s is 0; and we know that every vertex in S_0, which is s itself, has a unique shortest path from s. Let us assume the same three things for S_i as the inductive hypothesis, and see an outline of how this UL algorithm computes c_{i+1} and Σ_{i+1} along a unique path. It first sets c_{i+1} to be c_i and Σ_{i+1} to be Σ_i. Then it loops over all the vertices u of the graph G and checks whether the distance of u from s is exactly i+1. It does this by generating the vertices of S_i along a unique computation path, which it can do using the three things we assumed about S_i; due to the shortage of time we cannot get into the details here. First we check whether any vertex of S_i equals u, because that would mean the distance of u is at most i, not i+1. After that, we check whether some vertex of S_i has u as a neighbor: if so, the distance of u is exactly i+1, and we increment c_{i+1} by 1 and Σ_{i+1} by i+1. A deterministic reference for these quantities is sketched below.
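[Editor's note: a minimal deterministic stand-in for these quantities, to pin down the definitions. The real algorithm computes the same values in O(log n) space along a unique computation path; this sketch just runs a BFS that also counts shortest paths.]

```python
from collections import deque

def inductive_counts(adj, s):
    """For S_i = vertices at distance <= i from s, return the lists
    c_i = |S_i| and sigma_i = sum of d(s, v) over v in S_i, plus whether
    the graph is min-unique w.r.t. s (a unique shortest path from s to
    every reachable vertex)."""
    dist, npaths = {s: 0}, {s: 1}          # distance, #shortest paths
    queue = deque([s])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:              # first time v is seen
                dist[v] = dist[u] + 1
                npaths[v] = npaths[u]
                queue.append(v)
            elif dist[v] == dist[u] + 1:   # another shortest path into v
                npaths[v] += npaths[u]
    d_max = max(dist.values())
    c = [sum(1 for d in dist.values() if d <= i) for i in range(d_max + 1)]
    sigma = [sum(d for d in dist.values() if d <= i) for i in range(d_max + 1)]
    return c, sigma, all(p == 1 for p in npaths.values())
```

For example, on adj = {'s': ['a', 'b'], 'a': ['t'], 'b': ['t'], 't': []} the vertex t has two shortest paths from s, so this returns c = [1, 3, 4], sigma = [0, 2, 4], and min-unique = False.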
Once you are done with the values c_{i+1} and Σ_{i+1}: if the graph is min-unique, the property of S_i that every vertex in it has a unique shortest path from s will hold for S_{i+1} as well. So you can keep computing c_{i+1} and Σ_{i+1} up to c_n and Σ_n, where n is the number of vertices, and at the end you just say yes, because the graph is min-unique. But what if the graph is not min-unique? In that case some vertex will have two shortest paths from s. Take the vertex u nearest to the source s among those with more than one shortest path, and say its distance is i+1. Then while computing c_{i+1} and Σ_{i+1} you will find two edges into u from two different vertices of S_i. If this is the case you can simply halt and reject the instance at that point. So that decides whether the graph is min-unique with respect to s or not.

Under the promise that the graph is min-unique, you can also solve reachability. It is not hard to see that if the graph is min-unique, you will compute all the c_i's and Σ_i's properly, and you are bound to see every vertex reachable from s. So you keep a flag somewhere, set it to true if you ever see the node you are looking for, and output the flag when you are done with the c_i's and Σ_i's. This is what we do in the CUL algorithm on the configuration graph of the CNL machine.

So what we are doing here is that, for a given CNL machine M, we construct a CUL machine M' such that both of them have the same language. Here is the basic working of this machine M' on input x and auxiliary content A: it treats A as the auxiliary content of M, performs the double counting technique on G(M, x, A) with the initial configuration as the source, and while performing the double counting it looks for the accept node, because if the accept node is reachable you will find it somewhere; based on that it accepts the input x or not. But this is not correct as stated; there are three issues, which are also annotated in the sketch below. The first issue is that you do not have the space to store c_i and Σ_i: for a particular A, the configuration graph can be very large, exponential in n, so storing c_i may need about n bits, while a CUL machine has only O(log n) free space, and we know no good way to store c_i in the auxiliary space without losing its content. The second issue is that the graph may not be min-unique, and you should work with a graph which is min-unique. The third issue is that M' cannot loop over all configurations of M. This is required because, if you remember, while performing the double counting technique you have to loop over all the nodes and check whether their distance is i+1 or not. The CUL machine needs to do the same thing, and it can do it, but only by losing its initial auxiliary content; there is no good way of looping over all the configurations of M.
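[Editor's note: the naive outline above as code, with the three issues marked in comments. Both callables are hypothetical stand-ins: config_graph builds G(M, x, A), and double_counting runs the technique and reports whether it saw the accept node.]

```python
def m_prime_naive(config_graph, double_counting, x, A):
    """The uncorrected outline of M' on input x and auxiliary content A."""
    # Issue 1: |G(M, x, A)| can be exponential in n for this particular A,
    # so the counters c_i, sigma_i need ~n bits, far beyond the O(log n)
    # free space, and they cannot be parked on the auxiliary tape without
    # destroying content that must be restored.
    G = config_graph(x, A)
    # Issue 2: G need not be min-unique, which the technique requires.
    # Issue 3: the double counting loops over *all* configurations of M;
    # enumerating them naively overwrites the auxiliary content.
    return double_counting(G)   # accept iff the accept node was seen
```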
The solutions to problems 1 and 2 are very similar. First, we know that even though for a particular auxiliary content the configuration graph can be very large, the expected configuration graph size is small. Second, about making a graph min-unique, we know, by a simple application of the isolation lemma, that randomly assigning polynomially bounded weights to the edges of a directed graph makes it min-unique with very high probability. So under the assumption mentioned earlier, we have two pseudorandom generators which help us in creating a small configuration graph, together with a weight function that makes that configuration graph min-unique. Specifically, we work with two logspace-computable pseudorandom generators, F and W, for which we know there exist seeds s and s' such that G(M, x, A ⊕ F(s)) has small size, and the same graph is min-unique with respect to the weight function W(s').

The solution to the third problem uses a hash function which maps the large node names to small ones. The size of a node of the configuration graph is about n^c, and you cannot loop over all such nodes, so what you want is a hash function which maps those large nodes to small values. We do not have a single such function, but we have a hash family for which we know that at least one function in the family has the desired property: it maps the nodes of a small configuration graph injectively to small values.

With this, here is the final outline of the algorithm. On input x and auxiliary content A, M' iterates over all tuples formed from the seeds of F, the seeds of W, and the functions of the hash family H. For one particular s, one particular s', and one particular h_k, instead of working with G(M, x, A), it works with the hashed weighted graph: G(M, x, A ⊕ F(s)) under the weight function W(s') and the hash function h_k, with the initial configuration as the source. You perform the double counting on this hashed weighted graph, and while performing it you also detect whether the tuple is bad. By a bad tuple I mean one where either s does not create a small configuration graph, or s' does not give a min-unique weight function, or h_k does not injectively map the nodes of the configuration graph to small values. If the machine detects that the tuple is bad while performing the double counting, it moves to the next tuple. If it is a good tuple, it finishes computing the c_i's and Σ_i's, and at the end accepts or rejects the input based on whether it saw the accept node while performing the double counting. In outline:
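[Editor's note: a sketch of this outer loop. The generators F and W, the xor with F(s), and run_double_counting, which performs the double counting on the hashed weighted graph and reports "bad" when the tuple fails one of the three guarantees, are placeholders for the paper's constructions, not real APIs.]

```python
from itertools import product

def m_prime(x, A, seeds_f, seeds_w, hash_family, run_double_counting):
    """Outer loop of the CUL machine M': try every (s, s', h) tuple until
    one passes all the badness tests, then return its verdict."""
    for s, s_prime, h in product(seeds_f, seeds_w, hash_family):
        # Work with G(M, x, A xor F(s)), weighted by W(s'), hashed by h;
        # all of that happens inside run_double_counting.
        verdict = run_double_counting(x, A, s, s_prime, h)
        if verdict == "bad":        # graph too large, not min-unique, or
            continue                # h not injective on its nodes
        return verdict == "accept"  # good tuple: the answer is decisive
    return False  # unreachable if some tuple satisfies the PRG guarantees
```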
All this requires some minor fixes and modifications to the actual double counting technique, since we now perform it on the hashed weighted graph rather than the actual graph. I cannot cover all of them, obviously, but let me present one major obstacle that we faced. Assume everything goes well up to the i-th level; by that I mean that every vertex in S_i has a unique shortest path from the initial configuration, and the hash function h maps every vertex of S_i injectively to the smaller values. Here we compute c_{i+1} and Σ_{i+1} in a slightly different way: instead of going over all the nodes and checking whether each is at distance i+1, we go over all the hashed values v and check whether there is a node at distance i+1 which hashes to v. For this we can do a simple test: we generate all the vertices of S_i along a unique computation path, like we did for the double counting technique, and while generating them we check that none of the vertices of S_i hashes to v; and if there is a neighbor of one of the vertices of S_i which hashes to v, we increment c_{i+1} and Σ_{i+1} appropriately.

That is the fine case. The problem occurs when both things happen: there is a node v_1 in S_i which hashes to v, and there is also a node in S_i whose neighbor u hashes to v. Then we do not know whether v_1 is equal to u or not. It could be that v_1 = u, which is fine; in that case we just move to the next hash value and check for it. But if v_1 is not equal to u, then two distinct nodes hash to the same value, which means you are facing a bad hash function which is not injectively mapping the nodes. To detect this, note that at this point you realize you have some node hashing to v and you know there was one node earlier which also hashed to v, but due to the way we are generating these vertices, it is simply not possible to go back and compare the two nodes directly. So instead, we run this generation procedure again and again. We remember which was the first vertex and which neighbor of the second vertex we need to compare, then we rerun the procedure while storing the first bit of each of the two nodes, and at the end of the procedure we compare those two bits. If they are not equal, the nodes are not equal, you are facing a bad hash function, and you move to the next tuple. If they are equal, you do the same procedure for the second bits, and you keep doing this until you finish all the bits. At the end you know whether the two nodes were actually equal, and if they were not, you move to the next tuple.
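[Editor's note: a sketch of the bit-by-bit comparison. bit_of_enumerated(j) is a hypothetical helper that replays the unique enumeration and yields (position, j-th bit of that configuration); the sketch abstracts both suspects, the S_i vertex and the neighbor, as positions in the replayed enumeration.]

```python
def suspects_equal(bit_of_enumerated, pos1, pos2, n_bits):
    """One rerun of the unique enumeration per bit position: each pass
    retains only the j-th bit of the two suspect configurations (O(1)
    extra space beyond the enumeration itself), then compares them."""
    for j in range(n_bits):
        bit1 = bit2 = None
        for pos, b in bit_of_enumerated(j):   # one full replay per bit
            if pos == pos1:
                bit1 = b
            if pos == pos2:
                bit2 = b
        if bit1 != bit2:
            return False   # the two nodes differ: h is bad, next tuple
    return True            # all bits agree: same node, no collision
```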
So this was one way of detecting a bad hash function. We also had to detect whether the graph is min-unique or not, which was another hurdle, and apart from that we also had to check whether the configuration graph size is small or not. So yes, that was all. Thank you.

(Audience: Is it clear that CUL is closed under complement?) Yes, it is. I think the only thing you need to do is this: you know along a unique path whether the accept node is present or not, so at the end you can just flip your answer, and that would be the proof that CUL = co-CUL. This was also pointed out by one of the anonymous reviewers, and we checked it as well.

(Audience: The simulation needs these generators, so that comes from the assumption?) Yes, it comes from the assumption. (Audience: Exactly the same assumption that is used for complementing CNL?) Yes, exactly. (Audience: Is there any possibility that you may be able to do this simulation with a slightly weaker assumption?) Possibly. Here, all we need is one seed for which the configuration graph size is small, just one seed, whereas in the case of CNL = co-CNL you need at least half of the seeds to be good. So I do not know; for that, I guess we would have to look at those pseudorandom generator results and see whether the assumption there can be weakened, but yes, we did think about the fact that here all we need is just one seed.