Welcome everybody to the fourth installment already of the logic and philosophy seminar of this academic year. We start off 2023 with Paweł Pawlowski from Ghent University, who is doing a postdoc there, and who will be talking about the relation between modal logic and non-deterministic semantics. — First, let me thank you for being here, and let me thank Peter for inviting me. It's an awesome opportunity to share some of my work with you. The title is "One Framework to Rule Them All: How to Overcome Dugundji's Theorem by Using Non-Deterministic Semantics for Modal Logics". This is more or less a summary of a couple of papers, or drafts of papers, that I have with a few people. So the credit for the work is also due to Daniel Skurt of Ruhr University Bochum and to Elio La Rosa from the MCMP in Munich. If I say something really stupid or false, then it's probably my part of the work; if it's something really brilliant, then it's probably due to the co-authors. Okay, perfect. First we start with some motivations: what motivated me, why I'm doing this kind of work, and why it might be interesting. Then we go through the historical background, so we'll see what has been done in the history of non-deterministic semantics, as well as in the regular approach of using finitely many-valued logics to model modal logics. And then we'll see how to use non-deterministic semantics to do something interesting with modal logics. So the main aim is to develop an alternative semantics for modal logic. Next, to see how far we can actually push this alternative account: which systems we can interpret or capture, which we can't, and why. Maybe that does not sound really philosophical, but in principle you can afterwards compare the semantics we get, the non-deterministic one, with all the other semantics that are already available on the philosophical market, so to say.
This might be interesting because it's not even clear what you can get and what you can't, and why. Maybe it is more or less the same as Kripke's possible-world semantics; maybe it is more in the spirit of neighborhood semantics; maybe it's just incomparable with either of them. Another thing you can think of: there are some philosophical issues with possible-world semantics. One is that the whole machinery of the semantics seems not to be philosophically innocent. There are quite a few metaphysical things that need to be explained. What are these possible worlds? How can we access them? Are they just abstract entities? What is happening? Of course, from a mathematical perspective, mathematicians slash computer scientists usually do not care about this problem, but since we are philosophers, sometimes we do. So this would be one of the potential points where you can philosophically argue that the semantics I'm presenting might be interesting, even for people who do not believe in possible-world semantics in the first place: even if I don't believe in possible worlds, I still have some alternative account. The second thing is that if you look at possible-world semantics, the evaluation function is always relative to a world you start with, or to a given world. It is very local. Whereas in many-valued logics, valuations usually work slightly differently: they just assign values to formulas uniformly. There is nothing relative about that — this formula, under this valuation, has this particular value; there is no additional component that relativizes it. And then one needs to do a bit of mental gymnastics to get that with possible-world semantics: either you need to introduce a set of non-normal worlds or impossible worlds, or you can do something with the accessibility relation and then switch to neighborhood semantics.
And then, even if you initially believe in possible worlds and you are quite okay with the semantical apparatus and the ontological commitments, as soon as you go crazy with the notion of a possible world and introduce impossible worlds or non-normal worlds, there is really a lot of explaining to do in order to make it coherent from the philosophical perspective. And then we are back in this period of 1930–1960, the so-called syntactical period. The only thing people were doing with modal logics was to introduce some axioms and then do algebraic semantics on top of that. That's it. So, first, it's not even clear which modal logics are interesting or crucial and which really are not. Second, there is nothing else except the axiomatic or algebraic approach. And third, people back then didn't associate modality or modal logic with this clear link to possible worlds that we have now — there was no possible-world semantics yet. They were still thinking about modality as some kind of syntactical operator that's supposed to do stuff, and asking how to capture it. It's an interesting perspective, because nowadays when we think about modality we immediately reach for Kripke structures, but it was not so clear back then. So we need a bit of terminology. We start with a propositional language L. We have some set of propositional variables, and then a set of connectives — which I write with Fraktur letters and pronounce the German way; I have no idea what the proper English pronunciation is — and each connective of the language comes with its arity. Then we have the notion of a logical matrix. What's that? It's a triple. We have a non-empty set of values, and then a non-empty set of designated values. The idea is that the designated values are supposed to work as the value 1.
These are the values that are preserved in the notion of a tautology and in the notion of a consequence relation. And then we have the set O, which is supposed to interpret the connectives of the language and provide meaning for them. You can think about this in terms of truth tables: in classical propositional logic, these are going to be just the truth tables for negation, conjunction, disjunction, and whatnot. In our case, it's going to be a set of functions that interpret those guys — just functions from the n-th Cartesian power of the set of values to the set of values. And then, of course, we have the notion of a valuation, which behaves as usual: it takes the interpretation of the formula and assigns values according to this schema, so there is nothing shady. The valuation basically assigns values to atoms, to propositional variables, and if a formula is complex, the valuation just behaves according to the truth tables, the functions that interpret the connectives of the language. And then we have — sorry, the notion of a tautology: basically, it's just preservation of the designated values. And then we have the notion of a consequence relation, which is defined in the usual way: we say that if all the premises are designated, so is the conclusion. And this is how we will understand many-valued logic here: for the remainder of the talk, by a many-valued logic I mean a logic for which there is a logical matrix with finitely many values that is strongly complete with respect to that logic, meaning that it describes exactly the consequence relation of the logic. Now we come to the historical background — gathering the fellowship of the ring. Back in the 50s, Łukasiewicz had this interesting approach.
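Since these definitions are entirely standard, here is a minimal sketch in Python of a logical matrix for classical logic, with valuations and a brute-force tautology check. The formula encoding and all names are my own choices, not anything from the talk.

```python
from itertools import product

# A logical matrix for classical logic: values V, designated values D,
# and a set O of functions interpreting the connectives (the truth tables).
V = {0, 1}
D = {1}
O = {
    "neg": lambda a: 1 - a,
    "and": lambda a, b: min(a, b),
    "or":  lambda a, b: max(a, b),
}

def evaluate(formula, v):
    """Formulas are ('atom', name) or (connective, subformula, ...);
    a valuation v fixes the atoms, the rest follows O deterministically."""
    if formula[0] == "atom":
        return v[formula[1]]
    return O[formula[0]](*(evaluate(sub, v) for sub in formula[1:]))

def is_tautology(formula, atoms):
    """A tautology takes a designated value under every valuation."""
    return all(evaluate(formula, dict(zip(atoms, vals))) in D
               for vals in product(sorted(V), repeat=len(atoms)))

p = ("atom", "p")
print(is_tautology(("or", p, ("neg", p)), ["p"]))  # True: excluded middle
print(is_tautology(p, ["p"]))                      # False
```

The consequence relation would be checked the same way: enumerate valuations and test that whenever all premises are designated, the conclusion is too.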
So what he thought was something like: I already invented three-valued logic, trying to solve the problem of the sea battle tomorrow, the Aristotelian thing. But then there is this notion of possibility — can I do better? And what he actually devised here are four values, and — well, maybe not the first, but one of the first — semantic approaches to modal logic that was not algebraic. It is kind of algebraic, but not straightforwardly algebraic. The idea is that we have four values, T, t, f, F, and the designated value is capital T. The negation just flips capital T to small f, small t to capital F, small f to capital T, and capital F to small t. The box operation works as follows: for those two values it gives you the designated value, the capital T, and for the rest a non-designated value. The implication works more or less as expected — on the truth-functional side it works as it's supposed to: if we have t or T here and f or F there, it gives us something false, and for the rest it gives either capital or small t. The problem with this semantics is that it's really hard to say what those truth values are. My initial idea was that they're supposed to correspond to necessarily true, possibly true, possibly false, necessarily false — but if you look at the tables, they do not. So I read the Łukasiewicz paper from '53 to find out what he meant, and he was also underspecific. He just said: no, it's just a Cartesian product of these two algebras. So we do have some kind of semantics. We don't know the meaning of the truth values, but we do have some properties of the logic. The first thing that is quite clear is that this logic does not have the rule of necessitation. Why? Because the only designated value is capital T, so the rule of necessitation will not work.
Actually, it doesn't have any tautologies of the form box-of-something. What we do have is the modal axiom 4, which is kind of okay — that's a sensible axiom. We also have its converse. Okay, so far so good. This one can also be valid in the semantics, no problem; there are no philosophical issues with this axiom. We have this as well — it's a kind of closure under modus ponens within the scope of the box. Sounds sensible, right? It's also provable in most modal logics. But then we have this, and it's like: ah, we don't want that. If something is necessary, then there is a sentence that is equivalent to the statement of its own necessity. That's not provable in most modal logics. We also have this. — Oh really? You say "there is a sentence", but you mean for any sentence? — Yes, for any sentence, thank you. Also, if there are any questions, or if I say something that seems stupid, or actually is stupid, just feel free to interrupt me. Yes, for any sentence, exactly. — Then we have this as well, and this as well. So it's clear that, from today's perspective, you cannot find a decent Kripke frame for this logic, or even a decent generalization of Kripke semantics. And people labeled this logic as garbage, and basically it was: okay, we tried the many-valued approach, it does not work, let's move to something else. — Oh, sorry, there is a conditional one. — Yeah, and this is what I already told you. So the whole paradigm of doing modal logics as many-valued logics took a strong hit from these problems. And then, in the 40s, Dugundji proved the following theorem. He was thinking: okay, I will change the order of work.
Instead of devising a semantical account of a new modal logic, I will take the modal logics currently studied — the Lewis systems S1, S2, S3, S4, and S5 — and try to find semantics for those. And what Dugundji proved, by reinterpreting Gödel's theorem — Gödel proved a similar thing, really the same thing, for intuitionistic logic — is that if you restrict the number of values to be finite, then you cannot find a logical matrix for any of these systems. Which was a very strong hit for the whole approach of using many-valued logics for modal logics. That was basically: you can't. — Just a small question: was there an alternative available at that point? — Algebraic semantics. That was the approach, and people loved it. Now almost no one is working there, but back then people were doing these algebras. I don't know much about algebras, but I know it was the predominant thing: either you studied the systems axiomatically, or you were doing something algebraic, and those two were really close together, syntactically speaking. And then people who, like Łukasiewicz, were motivated by semantic investigations had nothing to work with. So that's the picture so far. But it's even worse, actually, because you can generalize Dugundji's theorem to the following. We fix a language — a propositional modal language with two modal operators that are supposed to be interdefinable. And then we take the modal logic H, axiomatized by the propositional axioms and the rule of modus ponens, and that's it. Actually, we don't even need the diamond here. So this is the system H. And the system H does not have a finitely many-valued semantics. What is even worse, no system between H and S5 has one. And if you look at the system H, it's not even really a modal logic.
Sorry — it's just propositional logic in the extended language. It's a really weak system. So this would mean that modality and many-valuedness do not come together. And now we are at the part where I show you some glimmer of hope: non-deterministic semantics to the rescue. A bit of history. The first idea of something resembling non-deterministic semantics was by this super famous logician whose existence I only learned about recently, Otto Karzyk, known to some as the father of Czech logic — a very important figure in the history of analytic philosophy and logic for Czech people. He had a paper where he said something about using complex values — something like multi-dimensional values. He didn't come up with non-deterministic semantics or anything like that; he just said that you can interpret the values as tuples. That was roughly the idea. Then we have nothing for a long time, and then there is Rescher with his paper in '62, where he says: we have this conditional in propositional logic, but it does not work well with natural language; maybe we can use something like non-deterministic semantics, or undetermined values, to study the implication of natural language. And his conclusion was: we cannot do that, it just trivially collapses to classical many-valued stuff. And the funny thing is that he was wrong, because it does not. But he didn't pursue it — it was a remark in the paper. And then we have almost nothing until '73, with these papers. The one from '73 is only available in Russian; the other one is in English.
And his idea was, more or less, to use non-deterministic semantics — proper non-deterministic semantics, with many values and everything — for modal logics that are weaker than normal modal logics: he was studying modal logic without the rule of necessitation. Whereas Kearns did something else. He said: I don't like possible worlds, so I'm going to develop an alternative account that works. And he provides a way to use non-deterministic semantics to characterize T, S4, and S5. And then nothing, completely — these three papers, one of them in Russian, two of them in English. And then suddenly Avron shows up, takes this non-deterministic paradigm, and formalizes it: introducing the appropriate notions, doing the meta-theory. And then he says: awesome, let's do it for proof theory. And there are plenty of results there, on labelled sequent calculi and whatnot. His next aim was: okay, let's do some paraconsistent stuff with it. And there are also a couple of interesting results about these paraconsistent systems — how some of them can be captured by using non-deterministic semantics and some of them can't. A lot of interaction, and a lot of interesting results had already been achieved — but nothing specifically about modal logic. And then those early papers were rediscovered, around 2016, by Marcelo Coniglio's group and by Hitoshi Omori — and what is funny is that they were rediscovered independently. Apparently it was the right Zeitgeist for these kinds of things, because people worked on this at the same time in different places — in Bochum, and in Campinas in Brazil — independently. And, as people say, the rest is history.
Recently — in the last five years, I think — the number of papers on modal logic and non-deterministic semantics has tripled or quadrupled. There are plenty of people working on it, and for logic, where maybe five or seven people is already a crowd, that's almost a Woodstock by logic standards. So we are going strong. So what is this semantics about? Instead of a logical matrix we have something that we call an Nmatrix. It's also a triple: we have a set of values and a non-empty set of designated values — so far it works as in the previous case, the case of a logical matrix — and here is the important difference: instead of giving an interpretation that takes the arity of the connective and gives you a single value, O gives you a set of values. For instance, you can think of disjunction, and then suddenly, if p is 1 and q is 1, the disjunction can be 0 or 1 — it gives you a set. And by doing that you can actually express a lot of interesting things. Formally, the interpretation is just a function from the n-th Cartesian power of the set of values to the power set of the set of values, minus the empty set. — Maybe it would be useful for the group to explain what "designated" means. — Oh yes, as I think I already told you, but just to recap: designated values behave as 1 does in propositional logic, meaning that those are the values that are supposed to be preserved in being a tautology and in the consequence relation. And what is a valuation now? The valuation works slightly differently: basically, it simply picks one value from each of these sets — it specifies exactly which of the admissible values you get. So if you start with two variables p and q, and suppose both of them have value 1, then the disjunction gives you 0 or 1, right?
In propositional logic this information would be enough to infer that the disjunction is 1. But in non-deterministic semantics the interpretation of the connective gives you a set, and then you can have a valuation that works like this, and the valuation splits into two: you have v1, where the disjunction is 0, and v2, where the disjunction is 1. Which means that the assignment of values to propositional variables does not uniquely determine the values of all the complex formulas of the language — but it still narrows down the set of possible valuations. And the notion of a tautology is as already explained. Okay, so we start with a language that has only the box, and the idea is that the values are going to be two-dimensional: the first dimension gives us information about whether a formula is true or false, and the second dimension gives us information about whether a formula is necessary or not. We will use the same names for the values as in the Łukasiewicz case, but now we clearly have a philosophical conception of the meaning of the values.
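The splitting of valuations just described can be shown concretely. Here is a small Python sketch, using the talk's toy disjunction table — or(1, 1) may be 0 or 1 — with everything else deterministic; the function names are my own.

```python
from itertools import product

# An Nmatrix differs from a matrix in one place: each interpretation
# returns a SET of admissible values instead of a single value.
# Toy table from the talk's illustration: or(1, 1) may be 0 or 1.
V = {0, 1}
D = {1}
O = {"or": lambda a, b: {0, 1} if (a, b) == (1, 1) else {max(a, b)}}

def possible_values(formula, v):
    """All values some legal valuation may assign to `formula`,
    given the atomic assignment v."""
    if formula[0] == "atom":
        return {v[formula[1]]}
    conn, subs = formula[0], formula[1:]
    out = set()
    for vals in product(*(possible_values(s, v) for s in subs)):
        out |= O[conn](*vals)
    return out

p, q = ("atom", "p"), ("atom", "q")
print(possible_values(("or", p, q), {"p": 1, "q": 1}))  # {0, 1}: the valuation splits
print(possible_values(("or", p, q), {"p": 0, "q": 1}))  # {1}
```

So the atomic assignment p = q = 1 is compatible with two valuations — one making the disjunction 0, one making it 1 — which is exactly the non-determinism.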
Capital T is true and necessary; small t is true but not necessary; small f is not true and not necessary; capital F is not true and necessary. And then we can characterize the logic H, and this is the Nmatrix for it. We have four values, two of them designated — in the case of Łukasiewicz's logic only the capital T was designated. And this is the interpretation: the interpretation of the box is such that basically no boxed formula comes out a tautology. And this is the conditional; the conditional works more or less as some kind of product of classical propositional logic with classical propositional logic: we have two values that behave as 1 and two values that behave as 0, and this is just to ensure that all the classical tautologies are still valid. We can compare it with Łukasiewicz's logic — oh, sorry, I should say this first: this notation means that the interpretation of the connective assigns a set of two values, and then the valuation singles out one of them. So we can compare it with Łukasiewicz: here he has just f instead of the set; here is a difference — for some weird reason Łukasiewicz had f here; and the implication is more or less the same. In our case, in the case of non-deterministic semantics, we have more than one value, but usually the value specified by Łukasiewicz's logic is one of the values specified by the interpretation of the connective in this table. And we can strengthen these semantics: we can add axiom T, axiom K, axiom 4 to this thing. To get T, what we need to do is make sure that capital T is the only value that counts as necessary for the box — we are changing the meaning of this value, actually: we are removing the possibility of some false formulas being necessary. We just say: if something is necessary, it has to be true, because this is what the axiom says. For axiom 4, on the other hand, what we do is we
say that if a value is capital — meaning it's supposed to be necessary, because that's what a capital letter stands for — then the box needs to give you a capital value. If you do that, then a formula with a capital designated value has a box that is also going to be capital and designated, and so the iterated box works. And for axiom K, what you need to do is change the interpretation for the cases where you have a capital letter and a small letter going to small letters, for t and f. If you do that, you can prove it by simply checking all the valuations — but it's a lot of work. I did it twice and I don't want to do it a third time: it takes a couple of hours, there are a couple of hundred valuations, and none of them are easy — you can't always predict what is happening. Okay. So now, what we did was start with a language that has only the box, but we would like to have the diamond as well. What do we do? Well, one way of incorporating the diamond is simply to add it to the language and then add another dimension to the values. So now we have eight values: here we have 0 or 1, here we have 0 or 1, and here we have 0 or 1 — this is true or false, this is necessary or not, and this is possible or not — which conceptually gives us eight options. So we have: capital T with a diamond — the lower index diamond says that the value represents the possibility of the formula — which is true, necessary, and possible; capital T without the diamond, which is true, necessary, and not possible; lowercase t with diamond, which is true, not necessary, possible; lowercase t, which is true, not necessary, not possible; capital F with diamond, not true, necessary, possible; capital F without the diamond, not true, necessary, not possible; small f with diamond, not true, not necessary, possible; and the really, really negative value, small f: not true, not necessary, not possible.
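The kind of brute-force valuation check just described can be automated. Below is a small Python sketch, back in the four-valued box-only setting. The tables are hypothetical stand-ins of my own — a product-style implication that is classical on the truth dimension, and a box that is true exactly when its argument counts as necessary — not the exact tables from the talk; the point is only the mechanics of checking an axiom and refining the tables.

```python
VALUES = ["T", "t", "f", "F"]            # letter encodes (true?, necessary?)
DESIGNATED = {"T", "t"}                  # the "true" values
def true(x):      return x in {"T", "t"}
def necessary(x): return x in {"T", "F"}

# Hypothetical tables: implication is classical on the truth dimension and
# unconstrained on the necessity dimension; box(A) is true iff A is necessary.
def imp(a, b):
    target = (not true(a)) or true(b)
    return {x for x in VALUES if true(x) == target}

def box(a, counts_as_necessary=necessary):
    return {x for x in VALUES if true(x) == counts_as_necessary(a)}

def axiom_T_holds(box_fn):
    """Brute force: is v(box p -> p) designated under every legal choice?"""
    for p in VALUES:
        for bp in box_fn(p):             # a value for box p
            for res in imp(bp, p):       # a value for box p -> p
                if res not in DESIGNATED:
                    return False
    return True

print(axiom_T_holds(box))                                   # False: p may be "F"
print(axiom_T_holds(lambda a: box(a, lambda x: x == "T")))  # True after refining
```

The refinement mirrors the move in the talk: capital T becomes the only value that counts as necessary for the box, which removes the possibility of false-but-necessary formulas validating box p while p is false.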
That's a lot of values, especially compared to deterministic semantics. And those guys are designated, kind of obviously, because they preserve truth, and designation is about truth preservation. Okay, first problem: we have two modal operators — how can we be sure that those operators actually denote possibility and necessity, as duals of one another? Because in principle we could be using two symbols for two modal operators that are not related at all, and there is nothing in our semantics so far that relates those operators. But it's very easy to incorporate that into our semantics. We start by splitting the four duality principles — the interdefinability principles of box and diamond — into four implications, and then the fifth option is to have all of them together, to make sure box and diamond are fully interdefinable. And then we can use the following: D with an overline, which we use for the set of all non-designated values — in this semantics simply the set of F-diamond, F, f, and f-diamond — and capital D, which is the set of all designated values. So what we do here, in order to make the first axiom hold — and this is an interesting observation — is that we do not tweak the interpretation of the modalities, because that is given by the meaning of the truth values; we tweak the interpretation of negation, because there are negations involved in the duality principles. What is actually interesting here is that it is in a way the fault of negation if your logic does not treat these formulas as equivalent. So to get the first axiom we need to tweak T-diamond and F-diamond to those sets, and T and F to these. To get only D2 we need to go here, for those values; to have D3 — I mean, there is no principled reason, probably there is some reason but it's not that interesting: I just calculated the possibilities and made sure that it works. There is not that much philosophical intuition here;
it's just brute-force engineering. Then we have the fourth axiom, given by those, and in order to get all of them together we simply take the intersections pointwise at each entry: here we are left with f, and here with F-diamond, F, T-diamond, T, and small t-diamond. And of course, if you want just a subset of them, you take the intersections pointwise; you just want to make sure that you always have a non-empty set of values, because otherwise you would not end up with a logic. And we can do even better on top of that. Everything so far is modular — that's an interesting observation as well: you can add any of these axioms for interdefinability, and then you can go for an implication that satisfies K or not, and that gives you a very big range of logics. You can also do axiom 4 — we've seen how to do it in the four-valued case, but we can do it in the eight-valued case too — and we can actually even go for the axioms that involve diamonds, like axiom 5. — Sorry, I had difficulty interpreting axiom 4. — Ah yes, there is something weird there; that's a very good catch. Actually, I made a typo: this is supposed to be this, the implication sign. Okay, that's a typo, but thanks. And then we have axiom 5 and axiom B, right?
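The pointwise-intersection move can be sketched directly. The negation tables below are hypothetical stand-ins, not the actual tables from the slides; the point is only the mechanics: intersect the entries of several refinements, and reject the result if any entry becomes empty, since then it is no longer an Nmatrix.

```python
def combine(*tables):
    """Pointwise intersection of interpretation tables for one connective;
    raises if some entry comes out empty (no longer an Nmatrix)."""
    out = {k: set.intersection(*(t[k] for t in tables)) for k in tables[0]}
    empty = [k for k, vs in out.items() if not vs]
    if empty:
        raise ValueError(f"empty entries at {empty}: not an Nmatrix")
    return out

# Two hypothetical partial negation tables, each enforcing one
# duality implication, combined into a table enforcing both:
neg_d1 = {("T◇",): {"f", "f◇"}, ("F",): {"T◇", "t◇"}}
neg_d2 = {("T◇",): {"f◇"},      ("F",): {"T◇", "T"}}
print(combine(neg_d1, neg_d2))   # {('T◇',): {'f◇'}, ('F',): {'T◇'}}
```

The non-emptiness check is exactly the caveat from the talk: intersecting refinements is only legitimate as long as every entry keeps at least one admissible value.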
Okay, so these are the three well-known axioms that we start with, and then we can add any of them, or any combination of them. And interestingly, if you want to add axiom 4, we only need to tweak the interpretation of the box for capital letters: we need to ensure that the capital letters give capital letters as interpretation. Because if a letter is capital, that means we have the box; and if its interpretation is again a capital letter, that means the boxed formula is itself necessary — box gives box box. For axiom 5 you need to do a bit of tweaking of the interpretation of the diamond, and the same for B. And here again everything is modular: you can take any of them, with any combination of the axioms, and with K or without K. Now, the point being: the logics presented so far do not have the rule of necessitation, and in principle it's not easy to get it back — there is no direct way of getting it back. Actually, you can even extend Dugundji's theorem to prove that there is no non-deterministic semantics for these systems. But what you can do is introduce some kind of valuation restriction, or filtration procedure: you start with all the possible valuations, you apply the filtration procedure, and what you are left with are the valuations that do work decently with necessitation. And the only way known at the moment is the method of m-th level valuations. So what do we do? We start with valuations in some Nmatrix over some language L, so we have the logic induced by the Nmatrix, and everything is done in the modal context. Then we have this set of so-called super-designated values; the idea is that these are the values that are both designated and capital in our notation, so they preserve both the truth of the formula and the necessariness of the formula. And then
we say that a valuation is of level 0 if it is just a valuation in the Nmatrix, and it is of level m+1 if and only if it is of level m and it assigns a super-designated value to every formula that is assigned a designated value by every valuation of the previous level — oh no, it snows on it, it snows; our system is very intrusive — okay. So that's our recursive definition. You start with all valuations, and then you say: look, if this formula is a tautology with respect to all valuations, then it has to have the super-designated value; but there are valuations according to which it has merely a designated value — remove them. That's the idea. Then you are at the first level, and at the first level you have new tautologies, but again they can still be assigned merely designated values, not super-designated ones — remove those valuations again. And then, unfortunately, for most logics you need to repeat this process ad infinitum, and you take the valuations that satisfy the condition at every level — the intersection of those valuations. And what is interesting: with the super-designated values capital T-diamond and capital T, you just get the rule of necessitation back, for any combination of the logics that we mentioned. And this is awesome, because some of these logics are actually logics that you cannot capture by using Kripke semantics. For instance, logic H with necessitation — no modal axioms, sorry, just the rule of necessitation — has no Kripke semantics; all the logics that do not have axiom K, the same thing. So it seems to be slightly more powerful than Kripke semantics. But on the other hand, there are certain things that you can do with Kripke semantics — you can make sure that you have these really complex axioms doing something or other — that are not so
obvious in non-deterministic semantics, because Kripke semantics from that perspective has infinitely many options, and here we are still contained within the eight values — there are actually only finitely many systems that you can describe. — Which ones can we actually describe? — Here, I'm glad that you asked. So now we proceed to showing what we can do with these semantics with respect to the box. We need the notion of a simple refinement, which basically says: if you have two Nmatrices based on the same language, with everything else kept fixed, then M2 is a simple refinement of M1 if and only if they have the same values, they have the same designated values, and the interpretation of the connectives in M2 is a kind of restriction of the interpretation in M1. You can think of it as a strengthening — not an under-specification but a precisification of the Nmatrix. And then we introduce the notion of a simple box refinement: this is a precisification only with respect to the interpretation of the box, with the rest kept constant. And we start with the eight values, this eight-valued framework — plenty of values, a lot of them. As a side note, there is a paper by Hitoshi Omori and Daniel Skurt where they employ a 16-valued Nmatrix for some modalities that are even trickier, with respect to certain reversed rules of necessitation — for instance with not-diamond-not instead of box. But eight is my sweet spot; I won't go to 16. And these are the designated values, more or less familiar from the previous slides. Then of course we have the set O that interprets the connectives of the language, and we take the rest of the usual connectives, like conjunction, disjunction, and equivalence, to be definable in the usual way. There is a bit of work to be done to make sure that everything works, but it's more or less tedious and not philosophically important to make sure that
the definitions do work. Then we have this. We now assume that we have the duality axioms D1 to D4, so the diamond and the box are interdefinable — this is the starting point. And this is the starting implication; this implication satisfies K, and that's the weakest logic, our starting point. That's the interpretation of the remaining connectives that we keep constant, and this is where we are going to play around: we'll check what happens if we change one of those entries, and we actually study almost all possible combinations of these entries. Now, an important point: for the designated values that are capital — the ones that represent the necessity of the formula — we only consider those refinements that still assign capital letters to that formula. This is because if we do not do that, the procedure of the level valuations does not work — or rather, it's not known; I was not able to make it work so far, to put it that way. What is interesting is that certain choices — certain things that you put in an entry, for instance capital T instead of all the values — do not strengthen the logic. Then of course the question is: what is the strongest semantics you can postulate so that the resulting logic remains the same? And the answer is that there is no unique solution to that problem: there are several maximally strong refinements that give you the same logic. What we are going to do now is this: first I will show you which of these maximal strengthenings are our starting points, and then we will go row by row, playing with the interpretations of the box. First the labels A, B, C, D, in the following sense: A is either capital T-diamond, small t,
or capital T, T-diamond, and E, F, G, H involve small f and small f-diamond. If you consider the set of these strengthenings, each of them does not result in a stronger logic — these are the maximal points. And if something assigns three values, for instance, then it also does not change the logic, so there are plenty of options to go forward. Here are the points where, if you strengthen, you get something more. So here we go: first we start with the axioms A, B, C, D, E, F, G and so on; the letter denotes the row that we are working with, and the lower index denotes the option. So if we have five values — capital T-diamond, maybe this or this or this — there are five options to go forward, and each of these moves gives you the following gaps; it's a lot of completeness results. If you go for B — interestingly, we only have five options for each of the capital values, and then seven options for each of the lowercase values; that's because we are removing the options where a capital letter goes to a lowercase letter, since otherwise we are not sure whether the designated situations can be regained within this framework. Then we go for C, and actually the axioms are more or less straightforward if you look at the meaning of the values. Lowercase t-diamond says: the formula is true, it's not necessary — check — and it's possible — check. Then the interpretation of the box of this formula: the box of the formula is supposed to be false, and false here, sure, but we already have it here, so we are not putting it here; and then it can be either. Those two values differ when it comes to the status of box-box: this one says box-box holds, this one says it doesn't, so we put diamond-box here, and so on for all of them. For the lowercase letters the entries are slightly longer. Then we go through all of them, and of course everything is super modular, so you can take any combination of anything, and it's always a non-empty set, it
always gives you a non-empty set, it always results in a proper Nmatrix. Now let me zoom in on the axioms with index 1. If you look at the axioms with index 1, and if we add all of them, then each of these particular axioms describes one situation — a combination of necessary, true, possible, or not possible, not necessary and false — and it basically says that in this particular scenario diamond-box is fine. So if we add all of them, this simply boils down to adding the corresponding points, and then we can add any combination of them and simply calculate what is happening with just propositional logic. Or, if we are lazy, we can go to this website, which is an implementation of a kind of Sahlqvist theorem, a Kripke frame conditions finder: you can postulate any combination of axioms, and it gives you the corresponding Kripke frame condition that the possible-world semantics for the modal logic needs to have. Of course it only works for logics that validate K. So we can simply write down any axiom that has been presented, and it should give you the answer; it also works for combinations of axioms — take two axioms and go for their conjunction. What is also interesting is that for some of these logics the hierarchy of level valuations has a slightly simpler structure: namely, if we start with a logic that validates axiom 4, then the resulting logic only needs one level of valuations to regain the rule of necessitation — the first level already gives you the closure under necessitation. What is also interesting about the hierarchy itself is the following; let me first start with the intuition of why you need the levels at all. If we start with a tautology, then the interpretation will always give us the set of all designated values right here, and then if we consider the box of it, the box seems to remove the
lowercase letters from it. So for the first-level valuations, here we will have just capital T and T-diamond, and here we just get the set of all the designated values. But then if you add another box, again you can go for the lowercase letters and you can get falsity of this axiom, so you need to go to the second level, and on the second level this becomes a tautology: you remove those valuations from this set, and you only keep these ones, so then this becomes a tautology. And then if you have another box — and since you can add infinitely many boxes, you need all the levels. The other thing that is interesting, but less positive, so to say — actually quite negative; let me just write it down — is the following. If we start with such a formula from this set, and we take the following valuation: here is t-diamond, this goes to F, this goes to t-diamond, this goes to F, this goes to T; so this box gets F, the implication goes to F, and here we go for T, T, T, T — all right?
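The level-by-level filtration described above can be put into code. What follows is a deliberately tiny toy of my own devising, not the construction from the slides: a two-valued stand-in for the eight-valued Nmatrix, with a single hard-coded tautology. The schematic idea is the one from the talk — level n+1 keeps only those valuations that assign a designated ("necessary") value to the box of every level-n tautology.

```python
from itertools import product

def level_filter(valuations, formulas, is_designated, box_ok, n):
    """Kearns-style level valuations, schematically: at each level,
    collect the formulas designated under *every* surviving valuation
    (the tautologies of that level), then discard any valuation that
    fails to make the boxes of those tautologies 'necessary'."""
    current = list(valuations)
    for _ in range(n):
        tautologies = [phi for phi in formulas
                       if all(is_designated(v, phi) for v in current)]
        current = [v for v in current
                   if all(box_ok(v, phi) for phi in tautologies)]
    return current

# Two-valued toy: one tautology 'p|~p', and an atom 'p' that is not one.
TAUT = 'p|~p'
valuations = [{'p': p, 'B(' + TAUT + ')': b}
              for p, b in product([0, 1], repeat=2)]

def is_designated(v, phi):
    return (phi == TAUT) or v[phi] == 1

def box_ok(v, phi):
    # Does v treat Box(phi) as designated, i.e. necessary?
    return v.get('B(' + phi + ')', 0) == 1

survivors = level_filter(valuations, [TAUT, 'p'], is_designated, box_ok, 1)
print(len(survivors))   # 2: only the valuations with B(p|~p) = 1 survive
```

Adding formulas with iterated boxes (B(B(p|~p)), and so on) reproduces the phenomenon from the talk: each extra box can require one more level of filtering, which is why the full hierarchy is needed.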
It seems that we can falsify this formula, so it's not going to be a tautology — but this is actually something that is provable, a tautology. So even if we start with the non-deterministic semantics in this form, this seems to be a counterexample to the completeness theorem, because we found a valuation that falsifies it, and there seems to be nothing here that would get filtered out. The problem is that the non-deterministic semantics with level valuations lacks analyticity: if you start with a partial valuation, meaning you assign values only to subformulas, then you may make a mistake and fail to see that some other formulas — which are not even subformulas of the formula in question — may influence the partial valuation you are considering. And actually, if you filter with respect to axiom K in this case — that's the point here — and remove some of the valuations for that formula, then it will turn out to be a tautology after all. This is something really tricky: it does not give you a decent method of verifying whether something is a tautology or not. People are now working on that, and it's not easy. There is a paper where they show a kind of procedure for regaining analyticity in the case of T in this form, but so far it's not clear what the underlying principle is; they just find sets by brute force — we can take this set of formulas and that set of formulas, and if the valuation is closed under those, then it's good enough to give us the answer — but it's not clear what the underlying principle is. And of course the relation between these semantics and the others — these are all open questions; some people are working on that, and we'll see how it goes. Back to the slides: it works for axiom 4 — axiom 4 limits the hierarchy to one level only — but for the rest of the axioms the answers are unknown. There was a
hypothesis that what is actually crucial for the hierarchy being finite is the number of non-equivalent modalities definable in a given logic, but it turned out to be false. So there is some other property that can make the hierarchy finite, but it's not clear which one. Thank you. — Shall we just take a five-minute short break and then start? — So the floor is open for questions. — I have a naive question. — It's not naive at all. — About the refinement technique: you have to remove things, and I didn't understand how you're doing that, except by brute force, like you showed. What is the motivation? Is it that your semantics is too rich and you have to get rid of stuff? Why not just take that at the beginning? — Yeah, I think I can explain that — we can try, at least. The problem is that we have two designated values. So if we have this formula, then the output for the implication is good, and here we can have capital T and T-diamond, and in our definition of tautology we are interested in truth preservation: all of these values represent truths, so we're good to go, perfect. But if we add a box, the situation is slightly different, because the box says that the formula must be necessary, and some of the values here say it's not going to be — so here we will get some non-designated values. If we do not do the refinement, we cannot distinguish those two cases at the object level, in the object language. If we simply go for truth preservation, then nothing is going to be necessary, because we have these two types of values — nothing of the form box-something is going to be a tautology. We can fix that by brute force, but then it just collapses, and this is half of Łukasiewicz's problem with his modal logic, actually: he couldn't make this distinction in the first place. In his system there is
only one designated value — truth — and by this he had real tautologies, whereas here we can make the distinction, but the price to pay is that we lose those tautologies; hence we need the strengthening. And the strengthening works, intuitively, like this. Think of a child learning modal logic. The child arrives at the fact that "if B then B" is a tautology. Having learned this new piece of information — it's a tautology — suppose the child believes that tautologies are necessary. From the Kripke-semantics point of view, what is happening? Tautologies are necessary; oh, this is a tautology, so I need to remove those valuations, because they do not see that this tautology is necessary. Then the child can say: this one is a tautology now. But then, if a second box comes along, again you have a similar problem: now the child knows that this is a tautology, so at this level he needs to remove those values, and then this is going to be a tautology. Unfortunately, if you are really stubborn — or the kid is really smart — you, or she, can go for arbitrarily many boxes; this is why you need the whole hierarchy. It's a bit like the Tarskian approach to language — the Tarskian axiomatic theory of truth — or, even better, Kripke's approach to truth: at the beginning, in the extension, you just put the things that are true automatically, and then at the second level you put the truths of the previous level, and so on. But in that setting you need to go slightly higher than omega, meaning a slightly bigger infinity, because the trick is that in first-order languages, at omega, you find new predicates, new
things, and you need to account for those as well, so you need to go up to the level omega-CK-1. — Don't you see that as a weakness? I don't, if I have to get rid of the semantics of possible worlds to get that; we can discuss whether it does it in a better way. — I totally agree — this is a certain thing that you cannot do — but I totally agree: I'm not saying that this is better, I'm saying that this is different. I just want to point out that there is independent interest in having this transfinite iteration, as in Tarski. — Maybe you can use the transfinite construction to get a completeness result; maybe it's not a weakness, maybe you can actually use it. — It does come with a slightly weird corollary: if you restrict your attention to a particular level, then what you are actually getting is a really weird form of the rule of necessitation, meaning that if something is provable at the n-th level, then you can add a box in your logic. So at each step you are closing your internal logic of the box under the things that are tautologies at the previous level, and this gives you an infinite sequence of stronger and stronger rules of necessitation that finally merge into the true rule of necessitation. — Does that allow you to avoid Gödel's theorem?
No, but it gives you some interesting things, because there are logics that actually do this — there is this modal logic, I think it's GLS, the provability logic: the idea is that the box is provability, and then they add the reflection schema, meaning this thing; but then they weaken the rule of necessitation to GL only, and this is more or less what we can represent here. Unfortunately we cannot represent everything — it's not clear how to get GL itself in this setting. I tried, I failed. — That would be pretty nice. — Yeah, that would be awesome, but I tried, and now I think you cannot, so I'm trying to prove that you cannot; it's still a relatively new field. Of course, on one hand it's a weakness: "great, instead of infinitely many worlds now I have infinitely many filtrations of the valuations — great job." I agree. We are not presenting something better, but something different. And probably there is actually a very interesting relation between what is happening at these levels and possible worlds, because you can think of these things in terms of possible worlds: those valuations are going to be maximally consistent sets, right? So you can simply represent those valuations as possible worlds, and then some relation between them. What exactly does the level-valuation technique do, how you should draw the arrows — we don't know, but that would be great. — Thanks a lot. How would you like to pursue the metaphysical consequences of your logic? You said the comparison with possible worlds is still ongoing. I would like to know whether, if you motivate this with the intuition behind valuations and the non-deterministic parts, you would be very close to what some people have proposed in terms of quantum mechanics, where you still have the probability
— that this is an ontological thing, when you say your valuation takes another step. Usually the possible-worlds approach to these modal ideas can be used to support certain interpretations of quantum mechanics; so if you can do another kind of semantics with your machinery, different from possible worlds, maybe it would suggest a different kind of metaphysical concept. — Yes. I know almost nothing about quantum mechanics, to be fair, but I do know that there is a paper where they build non-deterministic semantics — not for modal logic, but non-deterministic semantics for some quantum processes — which I think has been published in a journal, a physics journal, I believe. — Wasn't that the Brazilian logician? — No, I think it was done by a physicist, actually; I don't remember the name, but I can check afterwards. So that has been done. What the metaphysical implications of this are, I don't know yet. Ideally — the angle I am coming from is slightly more that of an engineer than a philosopher: there is a problem, I have these tools, can I fix it, or can I build a machine that does the same but is different? And then, after building the framework, I can think: okay, I built this machine, what can I do with it, and what are the repercussions? As with computers: first there was the theoretical model of computation, then computers were built, and only then did people think about the philosophical aspects, not the other way around. But now the semantics is already there. To be fair, one thing you could do is think of this from an epistemic perspective: agents not having enough information to judge certain connectives either way — they only have information that narrows the interpretation down, but not sufficiently. One could push this idea and maybe explain the whole semantics in these terms, I
guess; actually, that would make a bit of sense, but what you would need to show is that it can support that kind of interpretation. These are just ideas. — That's a good idea, thank you. I'm still on the comparison to possible worlds. You said that one of the problems, at least for you, is the world-relative evaluation. Here it's not relative; however, some of the valuations are bad, so you have to filter to get the evaluation, maybe endlessly. Maybe it's a matter of taste. — But it's not relative, and that was what was tripping me up: I like my valuations to be functions of a certain type, even if they need an infinite number of steps to get there — that's fine. It's coming from the Polish tradition of doing logic: for a Polish person, modal logic is non-classical logic. As soon as you have a set-theoretical semantics, you can do algebraic stuff, which is our friend — that's fine. But as soon as you are doing something relative, that's computer science; from that tradition's perspective, it's not clearly logic. I know that in Belgium it's the opposite: modal logic is still considered classical logic, and the really weird logics are considered philosophical, non-classical. But for a Polish person, if Tarski did it, then it's probably classical logic. Other questions?
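Since the values of the eight-valued framework were read earlier as tracking three yes/no dimensions — necessary, true, possible — here is a minimal sketch under my own illustrative encoding (the names and the designation choice are mine, not the slides'): a value is a triple, which gives exactly eight values, "necessary but not possible" combinations included, and the simple-refinement check from the first part of the talk comes down to entry-wise set inclusion.

```python
from itertools import product

# Hypothetical encoding: a value is a triple (necessary, true, possible).
VALUES = set(product([0, 1], repeat=3))            # exactly eight values
DESIGNATED = {v for v in VALUES if v[1] == 1}       # read "true" as designated

# 'Necessary but not possible' is representable as a value here:
weird = (1, 1, 0) in VALUES
print(weird)   # True: the encoding does not rule such combinations out

def is_simple_refinement(m2, m1):
    """m1, m2: dicts mapping input tuples to non-empty sets of output
    values. M2 simply refines M1 iff each entry of M2 is a non-empty
    subset of the corresponding entry of M1."""
    return all(m2[x] and m2[x] <= m1[x] for x in m1)

# Toy one-place table: a refinement only ever shrinks the output sets.
m1 = {(1, 1, 1): {(1, 1, 1), (0, 1, 1)}}
m2 = {(1, 1, 1): {(1, 1, 1)}}                       # precisifies the entry
print(is_simple_refinement(m2, m1))   # True
```

Whether a combination like (1, 1, 0) should survive into an actual Nmatrix is exactly the kind of choice the refinements in the talk are about; the sketch only shows that the value space itself accommodates it.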
— I have a side question, really not related to the subject. When you introduced the modalities you said: we have three dimensions — necessity, possibility and truth. At that point I was wondering whether something can be necessary without being possible, and whether you have that in your framework. — But that's exactly what we do here, right? (a lot of clicking) — I looked at the actual model; I wasn't sure whether it includes "necessary but not possible". — This is exactly what it does, and you can go crazy with that. — It's interesting: if you try to model natural language it seems very useful, because you can have something that is necessary but not possible. I was wondering about your framework. — So this one only goes up to 8, but as I said, Hitoshi Omori has 16; 8 is already enough. It's like you can add double negations in the scope of a modality — from this perspective this is hyperintensional. That's also an interesting point of view, and actually part of the topic of the project that I'm applying for here with Peter: these systems are hyperintensional as well, but the link has been neither set up nor elaborated nor investigated yet. In principle p and not-not-p are different, so you can say: okay, this is the behavior of p, but you can also look at the behavior of not-p, and then of not-not-p. — Sure. I would expect them to be the same, but no? — They actually seem to be different. — "Necessary but not possible" — for me something would be different there. — It might be, yes, but I didn't explore that. And then, of course, you don't need this machinery, right — you don't need the hierarchy of valuations; you can maybe do something else, because this works only with necessitation. An ongoing project, or a draft of a
paper that I should be working on, is to play around with this iteration procedure to get some other principles — for instance these, or some weak regularity principles — and then see whether you can do something interesting there. And coming from your perspective, it could be perfectly fine to come up with something that hasn't even been proposed yet, right? These are two different things. Can you strengthen the logic in an interesting way? I don't know — my goal is to get to an interesting logic at some point, and I've kept failing for about three years. — I feel you answered very technically a question that was quite theoretical; I just wonder... — But that's what logicians do, sorry. I don't really think about this — you're asking about my intuitions? That's a tricky one. I have the mind of an engineer: philosophers come to me with "we have these intuitions, now do some magic with logic", and I say okay, okay. — But you need a bit of philosophical motivation, because otherwise you can't sell it. — So there is this work that I've done, and there is absolutely nothing philosophical about it — well, there is a bit of space. Me personally, when it comes to possible worlds, I just hate the aspect that the valuation is relativized; it rubs me the wrong way. I don't know why — it's just my intuition that this is not what I think of when I think of valuations: formulas are either true or false, or whatever your values are, and that's it; there is no contextualizing possible world that you can additionally consider. And that's not to take anything away from Kripke — look, these axioms are true in
those types of structures, right? So that's the reason. As for the distinction between the principles — for me it's the way the negation works; unfortunately I don't think those are the principles that differ. I think the duality axioms are the important ones — that's the way I think about it. But maybe this is another option that might be considered: as long as you have a solid set of intuitions that guides you, you can start playing around; the other way around might be slightly tricky, because there are infinitely many systems in between, and not all of them are interesting. — More questions? If not, I have quite some. First of all, a remark more than a question, but maybe interesting. You were talking about the history of the logic, and I always found it very surprising that these quite technical proof-theoretic systems came first — who comes up with these weird formulas with boxes and diamonds without the intuitions about what a possible world is and what it is to access one? We now have such a Kripkean perspective on the whole concept of modality that it's hard to even understand how people conceived of modality before Kripke. It seems it has changed the whole of analytic philosophy in a fundamental way — it's not even possible anymore to go back. — That's right, exactly. — So do you have to block these intuitions to go through this hard process? — I kind of do — like bracketing in Husserl: I put it in brackets; it's there, but not actively in my consciousness. I do this, and then I can work in the many-valued approach. But actually, quite often the response is: "these systems are technically interesting, but why do we need them — we have Kripke semantics." And then I usually say: yeah, but you can do more here. "Oh yes, but Kripke semantics
is intuitive." You can go for weaker systems as well, and then: "yeah, but that's a strong one" — because you are saying Kripke semantics is intuitive for the normal modal logics. What if you introduce three types of possible worlds — the normal ones, the non-normal ones, and some third thing, star worlds, whatever? Then it's no longer that intuitive. It's also kind of reversed: in Kripke semantics you have this decent structure for normal modal logics, but if you want to go below them, you need to destroy something — either the way the accessibility relation works, or the notion of a possible world itself — and then you can incorporate more logics. Here the order is reversed: you start with a very weak logic and then you need to hammer it down — these valuations are wrong, just remove them — so a weak logic can be made stronger, whereas in Kripke semantics you start with something that's already quite strong, and to get weaker you have to destroy some of the decent properties. — Do you know of any even deeper historical roots of non-deterministic strategies? Because it's quite a natural thing to do, and I wonder whether in the Middle Ages, or even in Aristotle, people thought about it. Something can be true, something can be necessarily true, something can be merely contingently true — these are different states of being — and you can think about how they relate to each other; without the concept of a possible world you automatically fall into a kind of non-determinism. It's a very natural strategy. Is anything known about the very historical side? — The short answer, for the very historical side: no. The longer answer is that currently I'm trying, let's put it that way, to write a historical paper on the emergence of non-deterministic semantics, and one of the pivotal points was a book that
was meant for me — I mentioned the paper by Rescher from the sixties. I'm actually writing this paper with Daniel, and Daniel got hold of that book slightly before me, and then he sent me a couple of pages saying: look, Rescher is taking those ideas from some guy doing quantum something, from another Polish guy who published something in French that I'd never heard of — I was like, what's happening? There are a lot of things here that are not explored. So ask me in six months and I'll know. But that's only the twentieth century; before that, I don't know — and the really old ones would be more interesting to me, because then you don't have these preconceptions. — But those early authors were also without these preconceptions, and they already had these ideas. — Yes, but I'm not so sure, to be fair: they were phrased differently, right? They were usually phrased not as stuff to do with modality but as strict implication, driven by linguistic intuitions about actual language. To us it now seems obvious that you can interdefine the strict implication with the box, but back then I'm not so sure; they had the paradoxes of material implication, and with Russell classical logic was only becoming established — so at least they had this implication. — When you look at the difference in the Middle Ages between potentiality, virtuality, actuality — they don't even have a stable notion; classical logic is only one option among a bunch of weird stuff. — Yes, that might be, that could have been, but this is something where I also don't have enough expertise to interpret what the scholastics had in mind. I should have, because I'm Polish — there are two things in Polish philosophy: Thomism and logic. — So it seems like you could, for philosophical reasons, be
interested in broadening this up a little to different kinds of modality — this idea that has become sort of standard, though nobody explicitly endorses it, of circles of modality: you have mathematical necessity, then physical, nomic modality, maybe technical feasibility or something stricter still. You could translate this into many truth values: something that is mathematically necessary versus something that is merely physically necessary could have a different, lower truth value. So is there any work being done — not for technical but for philosophical reasons — on extending the set of truth values? It seems a very natural step to take once you can speak of different ways of being true. — Not that I'm aware of. What I'm working on now is the notion of functional completeness in this setting, which is an interesting topic on its own. A group from Israel is working on getting analyticity back. Daniel is working on pushing an alternative, relational representation of these semantics, FDE-style — maybe not really innovative, but restating the results in that kind of framework; there are some benefits and interesting things happening there, and some kind of relation to FDE as well, but this is still work in progress. And Marcelo's group from Brazil works on generalizations of this — that was in the past; they are now working on RNmatrices. But no one is working on that philosophical direction; it's not done that way. — Then a kind of very skeptical, or destructive, question. If you are open to non-determinism, a sort of radical non-determinism becomes possible once you allow filtrations: simply say there are two truth
values, false and true, and we are just going to filter out, by the axioms that are given to us, all the impossible valuations — without any truth tables, without any structure in the semantics itself. Just copy the proof theory directly, by saying these models are not allowed. But then your semantics is nothing anymore; it just copies the proof theory. — Well, not necessarily; you can still derive some of your proof-theoretic results... — Maybe it can help, but it doesn't provide any new information: just two truth values, basically nothing; the filtration is entirely on the basis of what the axioms do, and the axioms are proof theory — no information is added, no different insight gained. So my question would be: if this is the thing we don't want, for philosophical reasons, because it gives no information at all — it may guide us, indeed — then you clearly want more, because you have more values than just two; you want structure. — Well, to be fair, I have two values per dimension; I can say there are only two values, but the values are triples, and at each place in the triple you have one of the two. But also, I'm not sure, if you start from this proof-theoretic perspective, that the only thing you need is: there are two values, you have plenty of valuations, and then on top of that you put some axioms and remove valuations. — Yeah, but where do you get the axioms in the first place? — Well, I base my axioms on certain semantic intuitions about language, about stuff. So there is some kind of semantics, or pre-semantics, that your decision to adopt certain axioms is based on, and then there is your formal semantics, and the problem is how the two relate —
because it might be that it seems as if it does not give us any new information, while in principle this already gave us the information, and the two things are really connected. So the shape of this semantics can be influenced by the pre-axiomatic intuition that allows you to adopt certain axioms.

Sure, but the decision could then be to just call that first step a semantics already, and not see the proof theory as something purely syntactic, because that's really what it means. You can call it a semantics, but you can still do the mathematical analysis of this stuff, and then you are basically back here, right? Because then: how do you get the mathematics? Well, I am putting some axioms on top of that. Well, where do you get your axioms from? Well, there is this intuition that I have. So it's a vicious circle either way, right?

Yeah, but then I guess my question would be: if you can do it with two values, what is interesting about doing it with more than two? For which logic? Any logic. Because you can do it with two values, right? Yeah, but why would you need more, why would you want more, why would you want more structure in the matrices themselves?

Well, the short answer is that you can't, you can't always have only two values, and the elegance here is the simplicity of a logical matrix, right? If you insist on two values, then you don't have something as elegant as a logical matrix, you have a restricted framework. Yeah, but I mean with matrices, matrices that are restricted by axioms. Yes, but then you can do a lot of stuff with two values. Yes, but why would you want to do it with more than two? Because there are certain contexts where two values are not enough. For instance, you are an intuitionist, you believe you don't have this axiom; then you cannot do anything with a logical matrix that has two values. From a technical point of view one could just put that in and filter it out. Yeah, but that filtration is not known, and it's also not known whether you can find
this type of filtration for that.

Okay, but once we found a way to filter, that would be awesome for me; then just go for two values. But for matrices the logic should be pretty simple to do that: you just take all combinations of all formulas being true or false, and then you check: does this valuation respect my axioms? You only keep, as correct, those that do respect them.

Wait, wait, I'm not so sure, actually. What can go wrong? Well, for instance, the set of valuations might not be recursive, so you cannot actually check them; that's one of the things, right? And the set of valuations that you are left with at the end of the procedure: you know that there is such a set, okay, but which one? No one knows; there is no algorithm. As soon as you provide me an algorithm for doing that, then I am happy, but I just don't believe it's going to be that easy. And then we can work on it, because what you are describing sounds straightforward, but there is no algorithm, right? It's extremely non-trivial to check whether a valuation is actually going to be preserved. And for intuitionistic logic I don't know how you would even start this filtration procedure: along the way you need the values you are filtering against, so you need the super-designated and the designated values, but if you have only two values, then by which values are you going to remove anything? I have some intuitive ideas, but maybe they are going in entirely the wrong direction; let's discuss that another time.

Yeah, but in principle there was this program of starting with various types of axioms and then checking which of the axioms you can add and which you can't, so that the resulting thing works: you start with the empty logic, then you add modus ponens, then you try to come up with truth tables, and then you
ask what you need to change in the truth tables to add these, and so on. But you cannot separate certain axioms, you cannot separate, say, the negation. And then there is the result by the chief of all of Russian logic, Larisa Maksimova: she showed that there are infinitely many logics you can actually separate like this, but if you want to have interpolation, there are exactly seven. So there are plenty of these limitative results with respect to what you can algebraically do with two values; it's not as simple as that, unless maybe you find out more, but that's probably not something I'm going to do.

Any other questions? Because I don't want to... No, no, I'm curious. You said briefly that one can see your approach, the filtration, as establishing a relation between possible worlds, as if there were some isomorphism between the two. Could you say more? The short answer is no, but there is an intuition there. Just to give you where I'm coming from: when I have to teach possible-world semantics, it is extremely shocking for the engineering mind, as you said, that there is no metric, no relations between the worlds; they are just there, and of course you have an evaluation relative to a world, because you don't have an absolute one for a formula. But what you're saying is that your formalism is, to some extent, establishing relations between possible worlds, relations that can be formalized?

I think so; it's a rough idea, and I'm already over-committed, but here is the rough idea. You have this zero level, and you have valuations at the zero level, which is still slightly abstract, and then you have the first level, and the second level, and so on. But actually, if you look at these things, they are maximally consistent sets of formulas, so they are actually possible worlds. So we have this set of possible worlds,
somehow defined, that corresponds to the zero-level valuations, and then at the first level what we do is just remove some of them. And then you can try to impose structure on this, asking what the principle is: if you look from the perspective of the box operator, where do you put the accessibility relation, how do you translate the formalism? For instance, one way of doing that would be, for each of the propositions, to just track the transition of the truth value: if the zero-level valuation assigns a value to p, then you put into the possible world, for instance, p and box p, and so on. These are not going to give you full possible-world semantics, because these are just weaker logics, but maybe you could see what the filtration procedure does from the perspective of Kripke semantics. This is really vague, but what it actually does is relate the valuations in some way, and the valuations carry some structure here as well.

Time is finite, unfortunately, and I'm not that quick; also, I don't want to go the Tarski way, in the sense that I don't want to sniff cocaine to do research. That story is not from Tarski himself, it's from the biography, "Alfred Tarski: Life and Logic", which I wholeheartedly recommend; a lot of interesting stories.

Another question I would have is whether there is a link with Gödel's theorem, which was apparently used in the first proof. That's Gödel's theorem on the lack of a finite matrix semantics for intuitionistic logic. Then my question becomes: is there a link with Gödel's incompleteness theorem? Because a lot of things look like it: Tarski's undefinability theorem is very close to Gödel's incompleteness theorem, and so on. I don't work with first-order languages here, but in principle, in my PhD, I tried to use these types of logics for informal provability, so I was trying
to establish the link with Gödel's theorem, but it was a lot of work and I was not extremely happy with the results. So either I'm just stupid, or these things are really far apart, or it's impossible.

If you have some limitative results, that often says something about expressibility and about translatability into other well-known systems. And with matrices you can, in a hidden way, do a lot of mathematics: if you have the functions, you can already do pretty complicated stuff in there. So it wouldn't be extremely surprising if part of arithmetic could be handled inside, by brute force or in a hidden way, especially if you have an infinite number of values. But I guess that's excluded. That is also one of the reasons why I stick to finitely many values: if you go for infinitely many values, non-deterministic semantics becomes equivalent to matrix semantics, and you don't add anything by going non-deterministic. That's interesting, I didn't know that. Already from just omega, or for any infinity? I think in the paper they just said infinite, so I assume for all infinite sets of values; it's Zamansky and Avron, they proved that then the two become equivalent. The interesting part is when you keep the set of values finite, because then you actually gain something. This is actually what the Russian logician didn't see, but that's another paper being written here. The idea, I think, was that you could always remove the choice: say the table lets you choose, and the way you would represent this is by adding a new value to your semantics, so instead of a four-valued semantics you get a five-valued one, where the fifth value is the non-deterministic one, and no problem. But it's not clear that there is no problem, and I think it is only unproblematic for a certain specific class, though I didn't prove it yet. Namely, it is not a problem as
long as the values are of the same polarity, let's say all designated or all non-designated; then I think you can actually add any such combination as an additional value and just translate the valuation function accordingly. But if you have a set of opposite polarities, what would the new value be, designated or not designated? The non-deterministic value sometimes behaves as designated and sometimes as not designated, and for those cases I think you can prove that as soon as you try to represent one of them, you cannot determinize that non-deterministic matrix. I think I know how to prove it, but I didn't have time, because I needed to teach; otherwise I would have prepared a slide on it. That would be really interesting. Yeah, but it's work in progress; it's already about a page, so at least the theorem is stated in the draft.

Time is up, but we have two minutes left. All right. I was just thinking about this, excuse me, take it as an invitation, but just to share: imagine you have a differential equation; we have a theorem saying that a solution exists, but sometimes you cannot find the solution, it's a non-constructive theorem. Maybe you can use Gödel's theorem for Diophantine equations, just to show that an answer exists even though you cannot systematically find it. So maybe we can make a difference between what is necessary, for the proofs which are constructive, and what is merely possible, for those which are non-constructive; it's related to both Gödel's theorem and intuitionism. That would be a nice framework.

That was my idea for approaching intuitionistic logic, where I used a capital T: either there is a direct or an indirect proof. I failed. One of the problems that I had is this: the idea is that you start with some weaker logic, and then on top of that weaker logic you build the iteration, and then you arrive at the stronger logic. So
in the case of intuitionistic logic, the stronger logic would be intuitionistic logic, but then what was going to be the weaker logic? I tried various things: what is the positive fragment, is it minimal logic, one of the decently behaved fragments of propositional logic without double negation? And then I thought: I'll just say nothing about the negation, then use the negation to do the filtration, almost for free, because all the theorems would probably come out more or less the same. But it turned out that the starting logic is already too complex, and I couldn't find a nice basic semantics for it at that point. So that actually was a direction I tried to start with, but I failed, and then I started doing these things instead. Okay, thank you very much.
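[Editor's note: the "radical" two-valued filtration debated in the discussion above, where one starts from all 0/1 assignments and throws away those violating the axioms, can be sketched in a few lines. This is a toy of my own making: the tuple encoding of formulas, the single axiom scheme A -> (B -> A), and the four-formula universe are illustrative assumptions, not the speaker's construction.]

```python
# Toy two-valued filtration: start from ALL 0/1 assignments to a finite,
# subformula-closed set of formulas (no truth tables, no structure), then
# keep only assignments that (a) make every axiom instance true and
# (b) cannot violate modus ponens.
from itertools import product

def is_k_instance(f):
    """Check whether f is an instance of the scheme  A -> (B -> A)."""
    return (isinstance(f, tuple) and f[0] == '->'
            and isinstance(f[2], tuple) and f[2][0] == '->'
            and f[1] == f[2][2])

def respects(v):
    """Axiom instances are true, and modus ponens is never violated."""
    for f in v:
        if is_k_instance(f) and v[f] != 1:
            return False
        if isinstance(f, tuple) and f[0] == '->':
            # if antecedent and implication hold, consequent must hold
            if v[f[1]] == 1 and v[f] == 1 and v[f[2]] != 1:
                return False
    return True

# Universe: q -> p, p -> (q -> p), and their subformulas.
QP = ('->', 'q', 'p')
AX = ('->', 'p', QP)
universe = ['p', 'q', QP, AX]

valuations = [dict(zip(universe, bits)) for bits in product((0, 1), repeat=4)]
admissible = [v for v in valuations if respects(v)]

# The axiom instance is "valid": true under every surviving valuation...
print(all(v[AX] == 1 for v in admissible))                 # True
# ...and p semantically entails q -> p, forced by modus-ponens closure.
print(all(v[QP] == 1 for v in admissible if v['p'] == 1))  # True
# But q -> p alone is not valid: some admissible valuation falsifies it.
print(any(v[QP] == 0 for v in admissible))                 # True
```

This also makes the objection in the discussion visible: the surviving set of valuations is computed directly from the axioms, so the "semantics" adds no information beyond the proof theory, and for stronger languages the filtered set need not even be recursively enumerable.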
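[Editor's note: the non-deterministic truth tables that recur throughout the discussion can likewise be made concrete. A minimal sketch: a two-valued Nmatrix in which a connective's table returns a set of admissible values and a legal valuation picks one element per application. The table for the box-like connective is an illustrative assumption, not one of the systems from the talk.]

```python
# Minimal Nmatrix sketch: tables map inputs to SETS of values; a legal
# valuation is any choice function through those sets.
NEG = {0: {1}, 1: {0}}          # deterministic connective: singleton outputs
BOX = {0: {0}, 1: {0, 1}}       # non-deterministic on input 1

def legal_valuations(atom_values):
    """All valuations on [p, ~p, box p] consistent with the Nmatrix."""
    for p in atom_values:
        for n in NEG[p]:
            for b in BOX[p]:
                yield {'p': p, '~p': n, 'box p': b}

vals = list(legal_valuations((0, 1)))
# Non-truth-functionality: two legal valuations agree on p yet differ on box p.
print(sorted(v['box p'] for v in vals if v['p'] == 1))    # [0, 1]
# "box p entails p": whenever box p takes the designated value 1, so does p.
print(all(v['p'] == 1 for v in vals if v['box p'] == 1))  # True
# But p does not entail box p.
print(any(v['box p'] == 0 for v in vals if v['p'] == 1))  # True
```

The point of the sketch is the middle print: the same input value admits two different outputs across legal valuations, which is exactly the freedom that finite ordinary matrices cannot mimic without extra values.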
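[Editor's note: the determinization claim near the end, that a non-deterministic output set can be traded for one fresh value exactly when all its members share a polarity, can be sketched as a procedure. The value set, the designated set, and both sample tables below are illustrative assumptions; the impossibility half of the claim is, as the speaker says, an unproven conjecture.]

```python
# Sketch of polarity-based determinization: replace each same-polarity
# output set by a single fresh value standing for that set; refuse when
# a set mixes designated and undesignated values.
DESIGNATED = {1}

def uniform_polarity(s):
    """True iff every value in s is designated, or none is."""
    return s <= DESIGNATED or not (s & DESIGNATED)

def determinize(table):
    """Return a table with singleton outputs only, or None when some
    output set mixes polarities (where the construction breaks)."""
    new_table = {}
    for inp, out in table.items():
        if len(out) == 1:
            new_table[inp] = out
        elif uniform_polarity(out):
            new_table[inp] = {frozenset(out)}  # fresh value named by the set
        else:
            return None
    return new_table

ok_table  = {0: {0}, 1: {0, 'half'}}   # {0, half}: both undesignated -> fine
bad_table = {0: {0}, 1: {0, 1}}        # {0, 1}: mixed polarity -> blocked

print(determinize(ok_table))    # input 1 now maps to one composite value
print(determinize(bad_table))   # None
```

The mixed-polarity case is exactly the dilemma voiced in the discussion: the fresh value would have to count as designated in some legal valuations and undesignated in others, so no single polarity assignment works.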