So the thing is, typically, when you use SAT solvers, you really need to think of the SAT solver as something you interact with, and you want to retrieve as much information as possible from it. If you think of the textbook definition of SAT, you have a set of clauses and a decision problem: the answer is yes or no. So for the first set of clauses here the answer is yes, and for the second set of clauses, where you have the two unit clauses A and not C, the answer is no. That is the textbook view: SAT is a decision problem, the answer is yes or no. Now, this is not how we see it as solver designers. The input is the same, a set of clauses, but if there is an assignment of the variables that satisfies those clauses, you provide that assignment, else you answer UNSAT. This is the first difference: you provide a witness, a solution, which is not part of the textbook definition. So typically here, you are going to provide the assignment A, B, C, or you are going to answer UNSAT. But that was the SAT solver of the 90s. Another thing is that since 2013, if you answer UNSAT, you can also provide a proof, a proof on the side that a third party can check. Checking a SAT answer has always been easy; since 2013 you also have an efficient way to output a proof of unsatisfiability and a tool that is efficient in checking that proof. This is the famous DRAT proof that Matteo showed yesterday. So this is the second point. There is yet another service: if the clauses are satisfiable you provide the certificate, but if they are not, you return a subset of the clauses that explains the unsatisfiability. Suppose you have millions of clauses and the unsatisfiability comes from 20 of them: the solver will return those 20 clauses, or at least a small set of constraints containing them.
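To make the contrast concrete, here is a tiny sketch in Python. The brute-force checker is only a stand-in for a real SAT solver, and the two clause sets are chosen for illustration: the decision view would only answer yes or no, while the solver view hands back a witness (or UNSAT).

```python
from itertools import product

def solve(clauses, n_vars):
    """Brute-force SAT check. Clauses use DIMACS-style ints:
    variable v is v, its negation is -v. Returns the set of true
    variables in a model, or None if unsatisfiable."""
    for bits in product([False, True], repeat=n_vars):
        def val(lit):
            return bits[abs(lit) - 1] == (lit > 0)
        if all(any(val(l) for l in cl) for cl in clauses):
            return {v for v in range(1, n_vars + 1) if bits[v - 1]}
    return None

# A=1, B=2, C=3. First set: (A or B) and (not A or C): satisfiable.
C1 = [[1, 2], [-1, 3]]
# Second set: C1 plus the two unit clauses A and not C: unsatisfiable.
C2 = C1 + [[1], [-3]]

print(solve(C1, 3))   # a witness: the set of variables assigned true
print(solve(C2, 3))   # None: the answer is UNSAT
```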
So typically here, the answer is A, B, C if it is satisfiable, or the full set C2, which means the inconsistency is global. That is the worst case: it is not located in one part of your specification, it is global, and then if you want to resolve it, you have to relax something; it is probably not just a bug somewhere. We call this set an unsat core. That unsat core may not be minimal, but it is a subset of the original set of clauses that is unsat. And many of the major improvements in SAT technology and SAT algorithms have been made because people have been using this. The last part is solving under assumptions: you can add new variables, typically one selector variable per clause, or per group of clauses, and then you can ask, given those assumptions, given that you suppose those literals to be true or false, is it satisfiable or not? If you do that, it gives you this information again: you add more variables to your problem, but if it is unsatisfiable, you get an answer in terms of your selectors. So here is how you do it. I added a selector variable in front of each clause: the selector S1 for the clauses from C1 and the selector S2 for the clauses from C2. To retrieve C1, I just take that formula and assume not S1 and S2; S2 satisfies those two extra constraints, so they are ignored. And if I want to retrieve C2, that is the full formula C, I assume not S1 and not S2. This mechanism was provided by MiniSat in 2003, and if you know how to use it, it allows you to solve problems in very interesting ways. And typically, you can keep all the clauses you learn when you use those assumptions, because what you learn will contain those selectors, so you do not have to clear everything between calls. This is very important.
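A minimal sketch of the selector trick, with a brute-force checker standing in for MiniSat's solve-under-assumptions entry point (the variable numbering is my own): prefixing a clause with a selector lets an assumption switch the clause on or off.

```python
from itertools import product

def solve(clauses, n_vars, assumptions=()):
    """Brute-force SAT check under assumptions (a list of literals
    that must hold). Returns the set of true variables, or None."""
    for bits in product([False, True], repeat=n_vars):
        def val(lit):
            return bits[abs(lit) - 1] == (lit > 0)
        if all(val(l) for l in assumptions) and \
           all(any(val(l) for l in cl) for cl in clauses):
            return {v for v in range(1, n_vars + 1) if bits[v - 1]}
    return None

# A=1, C=2; selectors: s1=3 guards the clause from C1, s2=4 guards
# the extra unit clauses of C2. "s or clause" switches a clause off
# whenever its selector is assumed true.
guarded = [[3, 1],            # s1 or A        (from C1)
           [4, -1], [4, 2]]   # s2 or not A, s2 or C (extra clauses of C2)

# Activate only C1: assume not s1 and s2.
print(solve(guarded, 4, assumptions=[-3, 4]))    # SAT
# Activate everything, the full set C2: assume not s1 and not s2.
print(solve(guarded, 4, assumptions=[-3, -4]))   # None: UNSAT
```

The point the talk makes about incrementality is that learned clauses mention the selectors, so the same solver instance can be reused from one call to the next without clearing anything.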
This is called an incremental SAT oracle, and if you want to solve problems very efficiently, this is also something you need to know. Okay, so now suppose I have only Sat4j, I have only a SAT solver, a PB solver, and I want to solve MaxSAT. How can I do it? Well, the way MaxSAT works is that there is a penalty taken when you violate clauses: you are allowed to violate them, but you have to pay a price for it. There is a special weight, infinity, for hard clauses, but just consider that there are two sets of clauses: the classical clauses that you have to satisfy, and other clauses for which there is a price to pay if you violate them. The point is that you want to find a model that minimizes the loss. You have to satisfy as many constraints as possible; well, not just as many, that is classical MaxSAT. In weighted partial MaxSAT, you want to satisfy all hard constraints and minimize the penalty you get by violating soft constraints. So here is an example. You have not A or B with weight six and not B or C with weight eight: those are the soft clauses. The hard clauses are A and not C. For the assignment A, not B, not C, the penalty will be six. Why? Because with A and not C, we are violating the clause of weight six, which is cheaper than violating the one that would cost eight. So how do you solve it? Just like we saw a few minutes ago, with linear search. We add one selector to each soft clause, and then we just ask: can you give me a solution for this? Obviously the solver can simply satisfy all the selectors, and it will work. So here it gives me a model. Here I have the negative values of the selectors, and I am only interested in the positive ones: it says if I use B1, B3, B4, B10 and B12, those are the clauses I relaxed.
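The linear-search loop just described can be sketched as follows. The brute-force checker stands in for the solver, and, to keep the sketch short, the constraint "total relaxed weight strictly below the current best" is enforced by a filter instead of being encoded to CNF, which a real solver would have to do with a pseudo-Boolean or cardinality encoding.

```python
from itertools import product

def solve(clauses, n_vars, accept=lambda model: True):
    """Brute-force SAT check; `accept` filters models, a stand-in for
    a side constraint a real solver would receive in CNF/PB form."""
    for bits in product([False, True], repeat=n_vars):
        def val(lit):
            return bits[abs(lit) - 1] == (lit > 0)
        if all(any(val(l) for l in cl) for cl in clauses):
            model = {v for v in range(1, n_vars + 1) if bits[v - 1]}
            if accept(model):
                return model
    return None

def linear_search_maxsat(hard, soft, n_vars):
    """soft: list of (clause, weight). Returns the minimal penalty."""
    clauses = list(hard)
    sels = []
    for i, (cl, w) in enumerate(soft):
        b = n_vars + 1 + i           # fresh selector for this soft clause
        clauses.append(cl + [b])     # b true = clause relaxed
        sels.append((b, w))
    total_vars, bound = n_vars + len(soft), None
    while True:
        cost = lambda m: sum(w for b, w in sels if b in m)
        m = solve(clauses, total_vars,
                  accept=lambda m: bound is None or cost(m) < bound)
        if m is None:
            return bound             # no strictly better model: optimum
        bound = cost(m)

# The example from the talk: soft (not A or B, 6), (not B or C, 8);
# hard: A and not C. A=1, B=2, C=3.
print(linear_search_maxsat([[1], [-3]],
                           [([-1, 2], 6), ([-2, 3], 8)], 3))  # 6
```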
So what I try next is: okay, you relaxed five selectors; can you solve it by relaxing fewer than five? So I add that new constraint, and it says: I can do it, here is a solution where I only relax two selectors. Can you find something better? It tells me no. And now I have solved MaxSAT: I know that I need to relax two constraints. This is exactly what we saw for optimization in Sat4j; this is exactly how Sat4j works. This is the classical, easy way to do things. Now, just for your information, in 2009 Sat4j was considered state of the art on industrial MaxSAT problems, because by then there were industrial problems, there were none before, and because it was able to handle very big MaxSAT instances using that technology, and because we do some adjustments to reduce the number of selectors we add in the solver: typically, if a soft clause is a unit clause, you do not add a new variable, you just put its literal in the objective function. So there are some technical tricks. But the issue here is that you still have a cardinality constraint over all the selectors, so you need, in some way, to represent it. Now, one very impressive piece of work builds on what I told you you can get from the SAT solver: the unsat core. In 2006 there was a presentation of a MaxSAT solver based on retrieving those unsat cores and solving the optimization problem not from the SAT side but from the UNSAT side. It took some time, but this is currently one of the best ways to solve optimization problems with SAT, and it is completely different from anything you have seen before. So how do you do that? You take the formula and ask the SAT solver: is it satisfiable? It says no, but it does not just say no, it gives you a set of constraints that is not satisfiable.
And then you say: okay, I am going to relax those ones; now is it satisfiable? And you do that again and again. What is very nice is that if you call the optimal value minUNSAT, how many constraints you need to falsify, you know that you will need exactly k plus one calls to the SAT solver when that value is k. With linear search, when you are lucky you may need only two calls, but you may also need a huge number of calls. Here there is a guarantee: exactly k plus one calls to the SAT solver if the answer is k. This is very nice. How does it work? Same example as before. You call the solver, it tells you this is unsat. Then you add selectors to the clauses of that core, plus a new constraint saying: you are allowed to violate only one of these guys. Now you ask: is it satisfiable? No. So you add, and here you see I am adding new variables each time, but there are improvements where you need only one, a new constraint over the new core saying again that you may violate only one of its clauses. And then you ask, and now it is sat: two unsat calls have been enough to solve the problem. Again, you need to find a way to represent those cardinality constraints correctly. But in this case we are not in the situation that hurt us before: I do not have one huge cardinality constraint that creates a pigeonhole problem, I have many small cardinality constraints that are not tied to the whole formula. In the previous approach, the cardinality constraint ranged over all the added selectors, which means that constraint, like a pigeonhole, was firing over the whole problem.
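Here is a sketch in the spirit of that 2006 core-guided algorithm (Fu and Malik style): get a core, relax only the clauses of that core with fresh blocking variables, allow exactly one of them to fire, and count one unit of cost per core. The brute-force checker and the minimal-core extraction are toy stand-ins; a real solver returns a possibly non-minimal core as a by-product of the UNSAT answer.

```python
from itertools import product, combinations

def solve(clauses, n_vars):
    """Brute-force SAT check (DIMACS-style int literals)."""
    for bits in product([False, True], repeat=n_vars):
        def val(lit):
            return bits[abs(lit) - 1] == (lit > 0)
        if all(any(val(l) for l in cl) for cl in clauses):
            return {v for v in range(1, n_vars + 1) if bits[v - 1]}
    return None

def find_core(hard, softs, n_vars):
    """Toy core extraction: a smallest set of soft-clause indices that
    is unsatisfiable together with the hard clauses."""
    for r in range(1, len(softs) + 1):
        for idx in combinations(range(len(softs)), r):
            if solve(hard + [softs[i] for i in idx], n_vars) is None:
                return list(idx)

def core_guided_maxsat(hard, soft, n_vars):
    hard = [list(c) for c in hard]
    softs = [list(c) for c in soft]
    cost = 0
    while solve(hard + softs, n_vars) is None:   # one UNSAT answer per core
        core = find_core(hard, softs, n_vars)
        blocks = []
        for i in core:                           # fresh blocking variable
            n_vars += 1                          # per soft clause in the core
            softs[i].append(n_vars)
            blocks.append(n_vars)
        hard.append(blocks[:])                   # at least one fires...
        for a, b in combinations(blocks, 2):
            hard.append([-a, -b])                # ...and at most one
        cost += 1                                # lower bound grows by one
    return cost

# Toy instance: hard A, not C; soft (not A or B), (not B or C).
print(core_guided_maxsat([[1], [-3]], [[-1, 2], [-2, 3]], 3))  # 1
```

On this instance the answer is k = 1, and indeed the loop makes exactly k + 1 = 2 calls on the main formula: one UNSAT, one SAT.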
Here I am restricted to each core, which makes a difference, because it is something more local. In practice, UNSAT is usually harder, but it also depends on the kind of unsatisfiability. That is the reason why I would not have bet at all on this approach, but it is very efficient. So this is one way, and there have been many different improvements; this is currently the state of the art, at least in the unweighted case. If you want to take weights into account, that is another story, it is much more complicated, but it has been done and it works fine; that is the state of the art too. Now I want to show you another way to play with cores. Here the idea is that you still use cores, but then you solve another problem, called minimum hitting set, and you typically ask a MIP solver to do it. This is also something that works very well. How does it work? We still have the same example, and I still get the first core. Now, does everybody know what a hitting set is? A hitting set picks at least one element from each subset of a family of subsets; a minimum hitting set is a smallest such set. Typically here I have only one set, so I can pick any of its elements. Say I take B4. I am going to relax B4 and ask: are you satisfiable? It says no: look at this core, I cannot do anything. So now I have two sets, and some elements are in the intersection of the two, so I can pick B1, which is in both, and that's it. So I take B1 and ask again. No: look at this one, this is also a core. So now I have my third core, and I have to find a minimum hitting set among those elements. I have to take B1 or B2, because one of them belongs to two of the cores, and then I have to pick one more. So here, for instance, it will be B2 and B5.
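A sketch of the hitting-set idea (what is nowadays called the implicit hitting set approach): accumulate cores, compute a minimum hitting set of them (brute force here, where in practice a MIP solver would be asked), relax exactly the hit clauses, and stop as soon as the rest is satisfiable. The helpers are toy stand-ins.

```python
from itertools import product, combinations

def solve(clauses, n_vars):
    """Brute-force SAT check (DIMACS-style int literals)."""
    for bits in product([False, True], repeat=n_vars):
        def val(lit):
            return bits[abs(lit) - 1] == (lit > 0)
        if all(any(val(l) for l in cl) for cl in clauses):
            return {v for v in range(1, n_vars + 1) if bits[v - 1]}
    return None

def find_core(hard, soft, active, n_vars):
    """Smallest subset of the active soft clauses that is unsat
    together with the hard clauses (toy stand-in for the solver's core)."""
    for r in range(1, len(active) + 1):
        for idx in combinations(active, r):
            if solve(hard + [soft[i] for i in idx], n_vars) is None:
                return set(idx)

def min_hitting_set(cores, universe):
    """Brute-force minimum hitting set; a MIP solver in practice."""
    for r in range(len(universe) + 1):
        for h in combinations(universe, r):
            if all(set(h) & c for c in cores):
                return set(h)

def hs_maxsat(hard, soft, n_vars):
    cores = []
    while True:
        hs = min_hitting_set(cores, range(len(soft)))
        active = [i for i in range(len(soft)) if i not in hs]
        if solve(hard + [soft[i] for i in active], n_vars) is not None:
            return len(hs)          # optimum = size of the hitting set
        cores.append(find_core(hard, soft, active, n_vars))

# Same toy instance: hard A, not C; soft (not A or B), (not B or C).
print(hs_maxsat([[1], [-3]], [[-1, 2], [-2, 3]], 3))  # 1
```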
And now the problem is solved: we know that we have to relax two things, and we have the solution. So this is yet another approach where you use the unsat core, but in a different way. Because here, if you are lucky, suppose the later cores also contained B1: you would have just solved the problem right away. If the intersection of the cores is large, this approach may solve the problem very quickly, because the hitting set stays small. So the first way to solve the optimization problem was the obvious one. These two approaches really rely on the fact that you can get the unsat core from the solver. So you need an efficient solver, because you need to get the core quickly, and then there is the quality of the core: it might not be minimal, but some solvers are better than others at producing something close to a minimal one. So those are really different ways to solve problems: it is an optimization problem, but we are still using, I would say, innovative interaction with the SAT solver. You could not think of this if you thought of SAT as just the decision problem it is in the textbook. This became possible because we have unsat cores, which already existed in 2006, so this is something that came out of the community. Okay, now we are going to see that sometimes you need to be lucky, or rather, you solve problems expecting to be lucky. There is the Hamiltonian cycle problem, where you have a graph and you try to find a cycle going through all the vertices, and typically you need the following constraints.
So here you see cardinality constraints, out-degree equal one and in-degree equal one, and then you typically need on the order of n squared constraints to encode the rest, the connectivity constraints, and those are the issue. So what happens if you keep only the first constraints, the degree constraints, and drop the problematic ones? If you only require that the out-degree is one and the in-degree is one, what you can get is something like this: sub-cycles. You will not get a unique cycle, you will get several sub-cycles. All the extra constraints are there to prevent this case from happening and to force one single, big Hamiltonian cycle. Now, if you think about it, what is a SAT solver in that case? You fill it with a few constraints and you ask: is it satisfiable? In our case, it gives me a set of cycles, and then what I am going to check is: is it really a Hamiltonian cycle? If yes, I am lucky. It is like looking for an optimal solution and stumbling on the optimum: I still have to prove it is optimal, but if I am lucky enough, I am done. The same principle applies here: if the answer is not what you expect, you just provide a new set of clauses that prevents this case from happening again. So typically here you say: you are not allowed to take this sub-cycle, in one direction or the other. So we are breaking that cycle, and we break the other sub-cycle the same way. What are we doing? We are constructing the connectivity constraints lazily. So suppose we need on the order of n cubed constraints for the connectivity constraints.
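A sketch of that lazy loop on a toy complete digraph of my own choosing: encode only the degree-one constraints, and each time the model splits into sub-cycles, add one blocking clause per sub-cycle and solve again. The brute-force checker stands in for the SAT solver; the encoding and the loop follow the scheme just described.

```python
from itertools import product, combinations

def solve(clauses, n_vars):
    """Brute-force SAT check (DIMACS-style int literals)."""
    for bits in product([False, True], repeat=n_vars):
        def val(lit):
            return bits[abs(lit) - 1] == (lit > 0)
        if all(any(val(l) for l in cl) for cl in clauses):
            return {v for v in range(1, n_vars + 1) if bits[v - 1]}
    return None

n = 4                                           # complete digraph on 4 vertices
edges = [(i, j) for i in range(n) for j in range(n) if i != j]
var = {e: k + 1 for k, e in enumerate(edges)}   # one variable per arc

clauses = []
for v in range(n):
    outs = [var[(v, j)] for j in range(n) if j != v]
    ins = [var[(i, v)] for i in range(n) if i != v]
    for group in (outs, ins):                   # exactly-one out/in degree
        clauses.append(group[:])
        clauses.extend([-a, -b] for a, b in combinations(group, 2))

def cycles_of(model):
    """Decompose the chosen arcs (a permutation) into cycles."""
    succ = {i: j for (i, j) in edges if var[(i, j)] in model}
    seen, cycles = set(), []
    for s in range(n):
        if s not in seen:
            cyc, v = [], s
            while v not in seen:
                seen.add(v); cyc.append(v); v = succ[v]
            cycles.append(cyc)
    return cycles

ham = None
while ham is None:
    model = solve(clauses, len(edges))
    if model is None:
        break                                   # no Hamiltonian cycle exists
    cycles = cycles_of(model)
    if len(cycles) == 1:
        ham = cycles[0]                         # a genuine Hamiltonian cycle
    else:
        for cyc in cycles:                      # refine: block each sub-cycle
            arcs = list(zip(cyc, cyc[1:] + cyc[:1]))
            clauses.append([-var[a] for a in arcs])
print(ham)
```

If we are lucky, the very first model is already a Hamiltonian cycle and zero connectivity constraints are generated; in the worst case only the sub-cycles actually encountered get blocked.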
Just because we are lucky and we expect one assignment to represent the Hamiltonian cycle, we may get it by generating maybe no extra constraints at all. We would be very lucky, but maybe just n constraints instead of n cubed. And this is something very important. We call this driven by the SAT solver. Why is it driven? Because the solver gives you: I found this assignment. Is it correct? You analyze it, because you know what you are looking for, and you generate a refinement of the encoding, and those additional constraints will make sure that you will not find the same counterexample again. You may find one that is slightly different, but anyway, you use your solver as a lucky oracle: you know that you did not put in all the constraints, so you need to check its answers, and you want to do it efficiently. This is called counterexample-guided abstraction refinement, CEGAR. The idea is that you know you cannot generate the full CNF encoding, because it would be too large, so you use only part of it, and you enter a loop where, depending on the outcome of the solver, you add more constraints to the abstraction. Here I use the terms under-abstraction and over-abstraction, which is not how some people use them: when I say under-abstraction, I mean under-constrained, you have more solutions; over-abstraction means over-constrained, there are fewer solutions. What does that mean? If you have fewer solutions and you find one, it is a genuine solution. If you have more solutions and you are unsat, you know the original is unsat. That is the point. So typically I will say CEGAR-over or CEGAR-under for this. Okay, so this is how it works.
You take your formula, you start with your abstraction, you check if it is unsat. This is the under-abstraction: I have a shortcut, because I know the abstraction has more solutions, so if it is unsat, it is finished, the original is unsat. If it is sat, then I check the candidate using some criterion, and if it is what I want, I am finished, it is sat. Else I have to refine, typically because in my case it is not a Hamiltonian cycle, and I go back to the check and loop. So you replace the space needed by the encoding by a loop that may take a long time, but the expectation is that if you are lucky, at some point either you hit the shortcut or you eventually find something that is genuine for the original formula. So this is CEGAR-under. Now the other way round, CEGAR-over. Here the shortcut is: if it is satisfiable, I am done. And if I am unsat, then I need a criterion to decide whether the abstraction is, in a sense, equivalent to the original problem, in which case I can answer unsat; otherwise I refine and loop. This is typically how you do bounded planning, bounded model checking and so on: you increase a bound until you reach a criterion, typically a value that tells you, well, if you cannot find a plan of length up to n, then there is no plan at all, so you can stop. So you have a syntactic way of knowing when to stop. Those are the two schemes you can have. What does it mean in practice? If you expect your problem to be satisfiable, you should use the one with the SAT shortcut; if you expect it to be unsatisfiable, you should use the other one, because that is where you can decide quickly. And what is nice with this approach is that each time a new solver becomes available, you just pick the new one and your solution improves.
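The two loops can be written down generically. Here is a schematic Python version, exercised on a toy reachability question standing in for bounded model checking: the over-abstraction is "paths of length at most k", refinement increases k, and a known diameter bound plays the role of the stopping criterion. All names and the toy system are mine.

```python
def cegar_under(solve_abs, is_genuine, refine, abstraction):
    """Under-constrained abstraction: more solutions than the original,
    so UNSAT is a shortcut; a SAT answer must be checked for genuineness."""
    while True:
        model = solve_abs(abstraction)
        if model is None:
            return "UNSAT", None
        if is_genuine(model):
            return "SAT", model
        abstraction = refine(abstraction, model)

def cegar_over(solve_abs, refine, is_exhausted, abstraction):
    """Over-constrained abstraction: fewer solutions, so SAT is a
    shortcut; on UNSAT, stop only if the criterion says the abstraction
    now covers the original problem, else refine and loop."""
    while True:
        model = solve_abs(abstraction)
        if model is not None:
            return "SAT", model
        if is_exhausted(abstraction):
            return "UNSAT", None
        abstraction = refine(abstraction)

# Toy instance for cegar_over: can we reach `target` from 0 in the
# system i -> (i + step) % 8, unrolled to at most k steps?
def make_query(step, target):
    def solve_abs(k):
        path, v = [0], 0
        for _ in range(k):
            if v == target:
                return path
            v = (v + step) % 8
            path.append(v)
        return path if v == target else None
    return solve_abs

print(cegar_over(make_query(3, 5), lambda k: k + 1,
                 lambda k: k >= 8, 0))   # SAT: 5 is reached in 7 steps
print(cegar_over(make_query(2, 5), lambda k: k + 1,
                 lambda k: k >= 8, 0))   # UNSAT: odd states unreachable
```

The lazy Hamiltonian-cycle encoding discussed above is an instance of `cegar_under`: the relaxed encoding has more solutions, UNSAT is conclusive, and refinement adds the blocking clauses.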
And just to give you an example: in 2001, Henry Kautz was using Blackbox, a bounded planning system, and he wrote "Blackbox supercharged with Chaff". He had a solver that was two orders of magnitude faster than what he used to have: he just changed the engine, and nobody could beat him. This is really the main interest of the approach: you rely on the SAT engine and you can just reuse what exists. But the issue is, you might be in a case where you do not know in advance whether you are sat or unsat. And sometimes you need a huge number of steps before being able to stop, and sometimes you know that you will never reach that number of steps, because you are solving a PSPACE problem, for instance. So our proposal for that case is, in a way, a recursive version of the approach. Yes, it looks ugly, right? So how does it work? You are going to use both an over- and an under-abstraction. If you use only this path, it is the classical CEGAR-over, like bounded model checking and so on, because you have a shortcut for sat and then you loop and refine. Now, when we are not done, we have what we call RC, the recursive condition. And what we do is try to find an under-abstraction, and this is why it is recursive: we reuse the main procedure on a reduced abstraction of the problem, and we expect it to be unsat. If it is, we can terminate, and we know the original is unsat. If it is not, we go back and refine in the outer loop. So we keep the classical SAT shortcut, and in some cases, when we believe the problem is unsat but the stopping condition does not hold yet, we reduce the original formula in a way that preserves the UNSAT shortcut, and we try that.
Okay, so this... yes? The question is whether unsat for this over-abstraction means unsat for the original problem. Typically, you do not know. How do you check it? It is not logical equivalence; it is typically a bound that you reach. When you do bounded model checking, until you reach a given bound, you are not sure whether unsat means no bug. So it is a sort of quick test that tells you: yes, unsat in this abstraction means unsat in the original problem. What kind of test, a syntactic test? Yes, it has to be syntactic, depending on the problem, and it has to be something fast: a bound you reach, or something like that. It just says continue or not; it is not something expensive that you compute. So this notation is a bit strange, but it really reads: unsat for my abstraction implies unsat for my original problem, and if that is the case, I answer unsat. Do people use the unsat core as a witness and try to check with that witness? Yes, we are going to use unsat cores for everything. Okay, so, if you know what Ricard is in France: it is a famous brand of pastis, and the name RECAR is sort of a joke with my PhD student; we found the name funny, so we made sure we got it. So the idea is that you need to satisfy all these conditions to implement the framework: if your over-abstraction is satisfiable, then when you refine the abstraction, it remains satisfiable. You preserve satisfiability, and that is the reason why you can keep the SAT shortcut. And at some point you terminate: at some point the refinement will be exactly the original formula.
And then, here SAT is a shortcut through the over-abstraction, and UNSAT is a shortcut through the under-abstraction. And again, at some point the recursive condition will no longer hold, else you would loop forever. Okay. Daniel? Yes. Can I explain that notation, equivalent with subscript sat and a question mark? It means something that is easy to decide; it is not logical equivalence. It means that there exists a way to check that you are done: your abstraction can no longer be refined, you have retrieved something that is equivalent to the original problem. At each step of the refinement, the distance to the original problem is reduced, and at some point you can tell: okay, I am done. So if I have two formulas, psi and phi, with this "equivalent sat question mark", what does that mean? It means that, in that case, the unsat answer carries over: if the refinement is unsat, you can conclude unsat for the original formula. Is it the same as equisatisfiability? No, because equisatisfiability would have to hold for any refinement. Here it is the point where you decide you no longer need to refine, because you only loop, you only recur, when you get unsat and you do not know whether that unsat means unsat for the original. But at some point you know: when you do bounded model checking, if you know that after 1,000 steps there is no bug, it is okay, you conclude. That is what it means: in both cases, there must be a way to stop the loop. And that is the reason why we use this "sat" subscript: we cannot use equisatisfiability or logical equivalence, because at the beginning you do not know whether you are sat or unsat; that is only settled by the final formula. It is really a sort of syntactic test.
But it will be a bit clearer, I think, when we see the application. So this is the recursive condition. What it means is that we suppose the over-approximation is easier to solve. Again, we applied this because we wanted to solve a particular problem. All of this comes from the fact that general QBF can be solved using only SAT solvers, two SAT solvers. QBF is a PSPACE problem, and we wanted to solve another PSPACE problem using only SAT solvers, and we came up with this solution. So I am going to show you the problem we worked on. Just to summarize, what is important in that framework is that you have two levels of abstraction. One is the oracle level, the SAT solver for instance, and then you have the domain level, because you have this recursive call on the under-approximation. This is very important. And then the point is that it does not really matter whether you are satisfiable or unsatisfiable: you have a shortcut for both. And we will see that we take advantage of all the information we get, whatever call we make. In principle this can be applied to any kind of problem. So we worked on modal logic K. This is completely academic: one part of our lab works on modal logic, so we wanted a joint PhD, and we wanted to push forward practical solvers for that kind of logic. It is PSPACE-complete, like QBF, so you should not be able to use a plain SAT solver to solve it. And that was the aim of the PhD of Valentin Montmirail: when he started his PhD, we told him, you need to be able to solve modal logic using a SAT solver; you have three years, do it. So what do we have? We have classical propositional logic, and now we have box and diamond.
And so here, box means that this formula is necessarily true: it will be true in all accessible worlds. And diamond means that there exists an accessible world in which the formula is true. To make sense of this, think of a Kripke structure, in which we have a set of worlds, the structure over the worlds, the reachability relation, and then, for each world, an assignment that gives you the value of all the propositional variables. What is important is that the satisfaction relation is based on the accessibility relation between worlds: a propositional variable is true in a world if its valuation is true in that world; box phi is true in a world if phi is true in every accessible world; diamond phi is true in a world if phi is true in some accessible world; a conjunction is true in a world if each member of the conjunction is true in that world, and a disjunction if at least one member is. And I think I have something easier to understand what this means. Suppose the dots are formulas. We have a first modal logic formula here. There is a specific world in modal logic K, the world w0, which is just where we are now. Here, "necessarily blue" means there is a blue dot in every reachable state. This is true, so the first formula is true. The second says: in every reachable state, there exists a state that contains an orange dot. We go here, this is true; we go here, it is not true. So the second formula is false. For the third one: there exists a world in which we have red and from which there is a reachable state with no green. This is true. And then we have plain propositional logic: here we have red, so we satisfy this clause, well, this disjunction at least.
And here: there exists a world from which there is an accessible world where orange holds, and, in all worlds reachable from there, there is no orange. So here it is not possible: we go here, then here, but in all the reachable worlds it must not be orange, and it is orange, so the formula is false. And if I remove the reflexive edge on W3, then this formula is satisfied. So this is modal logic, this is what those formulas are, and this is the kind of thing we want to solve. So what we designed is a solver called MoSaiC. It is based on the SAT solver Glucose, and it implements the RECAR approach for the specific case of modal logic. Typically, we expect it to be sound, complete, and terminating, which might seem a problem because, okay, it is built on a SAT solver and the problem can be much harder. So now let us see what we can do for the over-abstraction. The over-abstraction is just like bounded model checking, like planning: we use a bound, and we try to express that something is reachable in n steps. We have the classical propositional variables saying this formula is true in world i, or false in world i, and then you have these formulas to translate the box and, here, the diamond. The formula itself is not very important; the only thing that matters is this n that says: I consider the worlds I can reach in one step, in two steps, and so on. So we have this bound, and we create the structure with just w0, then w0 and w1, then w0, w1, w2, and so on: we increase the number of worlds. And now the issue is: do we know whether there is a bound on the number of worlds? Because that is the only way we will be able to have this "equivalent sat" test. If we have no bound, we will not be able to stop.
There is one, but it is typically quite large, because it is defined in terms of the number of atoms in the formula and the modal depth of the formula. So it is huge: in the worst case we know there is a bound, but in practice we will only be able to reach it for small formulas; otherwise it is out of reach. So now for the under-abstraction, what can we do? Okay, if we look at the problems we have, there are cases where you have something like this conjunction: here you say there exists a world where we have P, and then, for all accessible worlds, not P. This is clearly unsatisfiable. And suppose that here you have a kappa which is a big formula, which makes the bound we have seen earlier very large, not reachable. We would like to be able to spot this kind of formula. Actually, all the modal logic solvers have preprocessing steps that find this and answer no directly, but we would like to retrieve it automatically with a SAT solver. How can we do that? I gave you some hints. I am going to translate this into SAT, as in the previous part. And I know there are cases like this where the cause of unsatisfiability is very limited: if I can apply my SAT translation to just this small part, I will be able to reach my bound, because the part is very small, and I will be able to conclude unsat. So this is what I want: I want the under-approximation to be given to me by the solver. And what I am going to use is selectors: I am going to use assumptions. Typically, each time I have a conjunction, I add those selectors, and I assume they are satisfied.
And if the formula is unsat, I will be able to retrieve which of those selectors were involved in the unsatisfiability. In our case, it will typically tell me: oh, I think it is related to these two guys. It might give me more selectors than the actual subformula that caused the inconsistency, but it will still let me reduce the original formula to something smaller. And then this is what happens, OK? We add those selectors, and when the answer is unsat, if the set of selectors in the core is smaller than the total number of selectors, then it means I can reduce my formula and try to solve a smaller one. Typically, because the first bound is not achievable in practice, we try to reduce the formula so that we can conclude unsatisfiability. And this is what we feed to the recursive call of RECAR: we consider only the smaller version of our modal logic formula, and for that one the bound was achievable, so the outer loop will be able to answer unsat. This is typically what it means, but what is important is simply that you retrieve those selectors from the calls, so you can build that kind of formula. What really matters here is that it is the unsat answer you receive from the call that tells you what to do; the search is really driven by the unsat calls. Just to make sure I understand: the first call has to be done with the entire formula, right? With S1, S2 and S3? Yes, it will be the huge one. And you assume them all to be true, so that you have the original formula? Yes, and then, because it is unsat, you get the set of selectors that are involved in the unsatisfiability.
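To make this interplay concrete, here is a toy stand-in for the solver calls: a brute-force SAT check under assumptions, plus a deletion-based shrinking of the assumptions into an unsat core. A real incremental solver such as Glucose answers such calls efficiently and returns a (not necessarily minimal) core directly from conflict analysis; this sketch only illustrates the interface.

```python
from itertools import product

def solve(clauses, assumptions, nvars):
    """Brute-force SAT check under unit assumptions (DIMACS-style int
    literals over variables 1..nvars). Stand-in for an incremental call."""
    for bits in product([False, True], repeat=nvars):
        model = {v: bits[v - 1] for v in range(1, nvars + 1)}
        if all(model[abs(a)] == (a > 0) for a in assumptions) and \
           all(any(model[abs(l)] == (l > 0) for l in c) for c in clauses):
            return True
    return False

def unsat_core(clauses, assumptions, nvars):
    """Shrink the assumptions to a subset that is still unsat
    (deletion-based minimization; assumes the full set is unsat)."""
    core = list(assumptions)
    i = 0
    while i < len(core):
        trial = core[:i] + core[i + 1:]
        if solve(clauses, trial, nvars):
            i += 1          # this assumption is needed for unsatisfiability
        else:
            core = trial    # still unsat without it: drop it
    return core
```

With selectors 3, 4, 5 guarding the groups {a}, {not a}, {b}, the core of the assumptions [-3, -4, -5] is [-3, -4]: only the first two groups are responsible, exactly the "these two guys" situation described above.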
Okay, and this recursive condition is typically: if you have a set of selectors that is smaller than the whole set of selectors in the formula, then you try to solve that reduced formula. And yes, at that point you really expect the formula to be unsat; that is the point. And as soon as you cannot find a smaller formula, you stop making recursive calls. Okay, and then we have all those properties available, and we have this loop, which is a bit more comprehensible. You start by trying to find a solution with one world, okay? Can I satisfy this formula with one world? If yes, you have a Kripke model, okay? If not, you check whether you have reached the theoretical bound, which is very large, so this will only fire for small formulas. If not, you look at the unsat core. So here, the point is (it is not completely exact, because we have already added that information here, but we check): is the unsat core global or not? If it is not global, then I recursively call my procedure on the simplified formula; else I increase the bound. And here, if the recursive call answers sat, I can take the number of worlds of the Kripke model of the simplified formula, and I know the new bound is the max between that value and n + 1. So I do not have to go world by world: if I know that I need, for instance, 10 worlds to satisfy the smaller formula, it means I need at least 10 worlds to solve the original formula. And then we loop again, okay? So now, does it work in practice? Here we have the CEGAR procedure, the original one, okay? And here we have the RECAR procedure. All the points being in that part means that RECAR is much better than CEGAR, for sure, because we added a shortcut, okay? So this is typically the case. And, you see, the 3CNF instances are not random 3-SAT formulas, okay?
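Putting it together, the loop just described can be sketched as follows. All names here are illustrative: solve_at_bound stands for the SAT call on the translation at bound n, theoretical_bound for the huge worst-case bound, and reduce_with_core for extracting the smaller formula from the selectors in the core (returning None when the core is global).

```python
def recar(phi, solve_at_bound, theoretical_bound, reduce_with_core):
    """Sketch of the loop described above (not the exact implementation).
    solve_at_bound(phi, n) -> ("SAT", model) or ("UNSAT", core)."""
    n = 1
    while True:
        status, info = solve_at_bound(phi, n)
        if status == "SAT":
            return "SAT", info           # Kripke model found with n worlds
        if n >= theoretical_bound(phi):
            return "UNSAT", None         # bound reached: truly unsat
        smaller = reduce_with_core(phi, info)
        if smaller is None:
            n += 1                       # core is global: just add a world
        else:
            status, model = recar(smaller, solve_at_bound,
                                  theoretical_bound, reduce_with_core)
            if status == "UNSAT":
                return "UNSAT", None     # unsat subset => whole conjunction unsat
            n = max(n + 1, len(model))   # smaller formula needed that many worlds
```

The two key moves from the discussion are visible here: the unsat shortcut (an unsat subset of a conjunction makes the whole conjunction unsat) and the jump of the bound to max(n + 1, number of worlds needed by the smaller formula).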
Those 3CNF instances have nothing to do with random 3-SAT; it is just the official name they have. I do not know why they use that name, but those formulas have only one variable and a lot of connectives before reaching the atoms. Then these are the Logics Workbench (LWB) benchmarks, either sat or unsat. You see, typically it is better than CEGAR. And these are all the solvers that used to exist, okay? In fact, Spartacus was the state of the art, and this is our approach, okay, on those benchmarks. So it is a bit better than the best one, but, I think on your student's poster there is something very similar to this, no? You can see that it is very time-consuming. And when you see that, you say: ah, that is bad. Can we do something? Do you have any idea what is going on here? Remember that loop, okay? I can show you something here: you have the two solvers, and here you have the time to generate the file, because we are feeding a SAT solver, so we need to generate the CNF, and here the solving time, okay? The solving time is much smaller by comparison. It takes more time to generate the instance than to solve it. What can we do? Incremental? Yeah, well, that would be something, yes. What can we do to fix this? Start with the full formula? Yes, something like that. Typically, what happens is that sometimes we will need to generate many, many, many formulas until we reach a bound, because the instance is unsat, and only then will we be able to decide unsat, right? So what we are going to do now is: okay, I think this formula will fit into memory, so I will just generate it straight away at the theoretical upper bound, and then I do not have to do all the intermediate steps, okay? And this is what it gives: now we are in much better shape, okay? But this has a drawback. The strategy is the following.
Suppose my formula tells me: okay, the bound is 1000. Before, I would do 1, 2, 3, 4, 5, up to 1000, meaning that if the formula is satisfiable, I know I will find a Kripke model with the smallest number of worlds, okay, by doing this. But it also means I have to generate 1000 problems, okay? Instead, I generate the whole formula once, at bound 1000, which can be quite large, okay? Some of the instances run out of memory on a computer with 256 gigabytes of memory. These are PSPACE problems, right? So we just try to generate the formula at the bound directly. If the formula is unsat, it does not change anything. If it is sat, the problem is that we do not find a model with the smallest number of worlds. So it is a trade-off. We lose the minimality of our Kripke model because we no longer go step by step, but if we are just interested in solving our problems, and most of the time they are unsat cases, then the behavior of the solver looks much, much better. These are really the two extremes: one is the linear search, and the other starts very far ahead. Did you try something like galloping search, initially one, two, four...? Yes, we tried many different things, but it is complicated because you lose properties. Typically, here we wanted to fix that specific problem, okay? And the analysis was really: we spend all our time generating files, and this is not good. So we tried things that look like binary search, I would say. But the problem is that, in most of the cases, we are not able to generate the CNF at intermediate bounds anyway, okay, because it becomes too big. You cannot do it directly, so we took this approach. This was the best trade-off we found, where we still have some properties on the Kripke models we find, okay? Because the problem is the following.
You could, for instance, with a bound between 1 and 1000, try 500, okay? But if the instance at 500 is unsat, I cannot conclude anything: I have to generate another instance at 1000 to be able to use that answer. If it is sat, it is fine, because you generated a smaller formula and you were able to solve it, but then what you lose is the guarantee that you have the smallest Kripke model. So, okay. And so, yes, what are the take-home messages? We tried to build a procedure that is reusable, where you have that recursive call at the domain level that gives you this unsat shortcut. You could also do it the other way around; it just depends on how you implement it. And it is really guided by the decision procedure, through the unsat cores and the assumptions. We applied it to modal logic and it works well. The only issue is that you have to be a specialist in modal logic. We had a colleague who is an expert in modal logic, because you need to be able to design your encoding and to know what you are allowed to do, what you are not allowed to do, what the bounds are, and so on. So you need a specialist of your domain to be able to implement this. And we needed many, many tweaks to make it run in practice. We tried for a year and a half: we had the idea, but to find all the pieces that really work, we implemented many things that used to work but were not good enough. In the end we finally found this, but it took a lot of time. And even in the code: everything is in NNF, so you also have to work with this normal form to be able to do all the encodings, and we have a lot of expertise in working with NNF in the lab. So yes, it also means a lot of work on the procedure to make it work. And that is the end of the second lecture.