It's my pleasure to introduce Professor Moshe Vardi, who is visiting us and has been with us for the last six days. Professor Vardi is the Karen Ostrum George Distinguished Service Professor in Computational Engineering and Director of the Ken Kennedy Institute for Information Technology at Rice University. He is a co-recipient of three IBM Outstanding Innovation Awards, the Gödel Prize, the ACM Kanellakis Award, the ACM SIGMOD Codd Award, the Blaise Pascal Medal, the IEEE Computer Society Goode Award, the EATCS Distinguished Achievements Award, and the Southeastern Universities Research Association's Distinguished Scientist Award. He has a long list of other awards to his credit, so let me not go into that. Professor Vardi has made seminal contributions in diverse areas of computer science: the theory of databases, finite model theory (he is one of its founding fathers), temporal logics, and many other things. Today he's going to talk about a very interesting topic, the rise and fall of linear temporal logic. In fact, he's credited with one of the best algorithms for model checking linear temporal logic, and he's going to talk about its rise and fall. So, Professor Vardi.

Okay. So let us start with the monadic class: first-order logic where we have one binary predicate, equality, and everything else is a monadic predicate. You can still say things such as: if for all x, p(x) implies q(x), and for all x, q(x) implies r(x), then it follows that for all x, p(x) implies r(x). And if you know a little bit about the history of logic, this goes back to the syllogisms of Aristotle; those were all about monadic logic, not even with equality. Only late in the 19th century did people start looking at higher-arity predicates. And Löwenheim proved that this class is decidable. At this point there is no undecidability yet; only later do we find out that full first-order logic is undecidable. This class is decidable. And the techniques that he used are very basic techniques; we use them to this very day. You prove a bounded-model property: given a sentence, if it is satisfiable, there is a model that cannot be too large. Therefore, you can systematically search for such a model. And for this, you use quantifier elimination: if you eliminate quantifiers, you reduce the problem essentially to propositional logic. A few years later, Skolem proved that this even works for monadic second-order logic, where you can additionally quantify over predicates. So the same technique works for both.

Now, this class allows, as I said, one binary predicate, which is equality. And you could ask what happens if you add other binary relations. That had to wait about 40 years. In the 50s, people studied what happens when you add linear order to the monadic class, and out of it came a beautiful connection with automata theory. So let me remind you, standing on one foot, of the basic theory of automata. We are talking about nondeterministic finite automata: we have a finite alphabet, a finite set of states, a set of initial states, a nondeterministic transition function which, given a state and a letter, gives you a set of successor states, and a set of accepting states. Given an input word, a run is a sequence of states: you start in some initial state, at any given point you have a set of possible states, you guess one of them, and you make a transition. And the final state has to be accepting. So this defines the notion of acceptance.
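To make that definition concrete, here is a minimal sketch of the NFA model just described; the class name, field names, and the trick of simulating all runs at once (instead of guessing one) are my own illustration, not anything from the talk.

```python
# A minimal NFA sketch: states, initial states, nondeterministic
# transitions, accepting states. Rather than guessing a run, we track
# the whole set of states the automaton could currently be in.

class NFA:
    def __init__(self, states, initial, trans, accepting):
        self.states = states        # finite set of states
        self.initial = initial      # set of initial states
        self.trans = trans          # dict: (state, letter) -> set of successor states
        self.accepting = accepting  # set of accepting states

    def accepts(self, word):
        current = set(self.initial)
        for letter in word:
            current = {t for s in current
                         for t in self.trans.get((s, letter), set())}
        # accepted iff some run can end in an accepting state
        return bool(current & set(self.accepting))
```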
And this gives us a language, the language accepted by this automaton. This is a very, very basic model in computer science, one of the most fundamental models of computation: finite-state machines. For example, here is a little example. This automaton has two states, left and right. You enter on the left, and you can see that zero always takes you back to the left, and one always takes you to the right. The right state is the red circle, so it's the accepting state. To accept, you must reach the red state, which means the word has to end with one. So the language described by this automaton is the set of all words that end with one. And we know that NFAs define the class of regular languages, which is a class with many, many wonderful properties, a very robust class.

Now, in the 50s, people realized that you can think of a word as a mathematical structure. What is a mathematical structure? It has to have a domain; we take as the domain the set of positions in the word. We need relations; we take one binary relation to compare positions: which position is to the left of which position. And we have monadic predicates: for every letter, we need to know where that letter occurs. At position three, p_a will be true if the letter a is written there. And now we can write sentences about this structure using the atomic predicates p_a(x) and x < y. For example, I can say: there exists x such that for all y, not x < y. This means x must be the rightmost position, because there is nothing to its right. And p_a(x), which means that the last letter is a. In fact, we can generalize further and add quantification over sets of positions, and this gives us MSO with linear order. So remember, we had MSO with equality in the 1910s, and now we have MSO with linear order.

And three people, one in the Soviet Union and two in the United States, proved independently that MSO and NFA have the same expressive power: they are equivalent, and both define the class of regular languages. The proof is constructive. On one hand, given an NFA, we have an algorithm that produces a sentence that says "there exists an accepting run"; that's all you need to say. "There exists a run" is second-order quantification over a monadic predicate, and then you say that it is an accepting run, which is a first-order constraint. So, NFA to MSO. In the other direction, you go from MSO to NFA, and you do it by induction on the structure of the formula. You have atomic formulas, and you show there are small automata corresponding to them. Then you have to show closure under disjunction, which is union; under existential quantification, which, you may remember if you took an automata theory course, corresponds to projection; and under negation, which is complementation. And if you remember automata theory, regular languages are closed under union, projection, and complementation. So you can do it inductively: you start from little automata, and you inductively build an automaton for the whole sentence. Today it's a fairly elementary proof; in fact, it's possible to cover it in an undergraduate class. There's no rocket science there, nothing too sophisticated. Well, actually, I'm rewriting history a little bit. They used DFAs. And the trouble with DFAs is that when you project a DFA, you get an NFA, and you have to determinize again.
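Reusing the NFA sketch above: here is the two-state example, plus the one closure step the talk flags as expensive, complementation via the subset construction (worst case 2^n states) followed by flipping acceptance. Again a hedged sketch with my own naming, not the historical construction verbatim.

```python
# The example automaton: words over {0,1} that end with 1.
ends_with_one = NFA(
    states={"L", "R"}, initial={"L"},
    trans={("L", "0"): {"L"}, ("L", "1"): {"R"},
           ("R", "0"): {"L"}, ("R", "1"): {"R"}},
    accepting={"R"},
)

def complement(nfa, alphabet):
    """Determinize (subset construction) and flip acceptance: the 2^n step."""
    start = frozenset(nfa.initial)
    states, todo, trans = {start}, [start], {}
    while todo:
        S = todo.pop()
        for a in alphabet:
            T = frozenset(t for s in S for t in nfa.trans.get((s, a), set()))
            trans[(S, a)] = {T}
            if T not in states:
                states.add(T)
                todo.append(T)
    # accept exactly the subset-states containing no accepting NFA state
    accepting = {S for S in states if not (S & set(nfa.accepting))}
    return NFA(states, {start}, trans, accepting)

assert ends_with_one.accepts("0011") and not ends_with_one.accepts("0110")
assert complement(ends_with_one, {"0", "1"}).accepts("0110")
```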
So somewhere you have to pay an exponential cost, either in the negation or in the projection. You can do it either with DFAs, which is what they did, or with NFAs. NFAs were only introduced by Rabin and Scott in '59, so I'm cheating a little bit. But it's equivalent. Good catch.

Now, what is this translation good for? On one hand, it's just a beautiful theorem. It shows us that two very, very different-looking formalisms are equivalent, and it tells us something about the robustness of the concept of regular languages. But beyond that, we actually get concrete algorithms from it. So let's see what algorithms we get. On the automata-theoretic side, the most fundamental question about an automaton is: does it accept anything? Is the language non-empty? If it's an empty automaton, we throw it away; it's not going to do you much good. This is the non-emptiness problem. Now, you can go back to this picture and ask: is this automaton non-empty? And the answer is, well, of course; you can go from an initial state to a final state, so of course it's non-empty. And in fact, that's all you need to be able to do: go from an initial state to a final state. We can formulate it in the following way. For each automaton, we can construct a directed graph. The nodes are the states, and you have an edge from s to t if there is some transition from s to t. You view the automaton as an edge-labeled graph, delete the labels, and you end up with a directed graph. And the lemma, which is really a trivial exercise, says that the language is non-empty if and only if there is a path from an initial state to an accepting state. So this is just graph reachability, one of the most fundamental algorithms in any algorithms course: first you do sorting, then you do graph reachability. You can do breadth-first or depth-first, it doesn't matter; it's a linear-time algorithm.

Now, on the logic side, the analogous question is: given a sentence, does it have a model? Does it describe anything? If it has no model, again, it's not a very interesting sentence. And it's not at all clear how to approach this problem. We get a complex sentence, with quantifiers and Boolean operators; how do we even start? On the face of it, it's not clear how you would solve it. But the automata-logic connection gives us an algorithm. The sentence is satisfiable if and only if the corresponding automaton is non-empty, because the automaton describes a language and the sentence describes a language, its set of models; these are one and the same. So the algorithm is: take the formula, apply the inductive construction to translate it into an automaton, and then check the automaton for non-emptiness. The second step, checking non-emptiness, we know is trivial; it's linear time. What's left is the first part, translating into an automaton, and now we need to look at the complexity. And we realize that in this inductive construction, union for NFAs is easy and projection for NFAs is easy, but to complement an NFA you have to determinize first and then complement, and that's exponential. And because these operators may be nested, what you get is a tower of exponentials whose height is unbounded. At the time, people didn't know whether you can do better than that. But about 17 years later, in 1974, Larry Stockmeyer showed that you cannot do better. This is what we call inherently non-elementary: no bounded-height tower of exponentials bounds the complexity.
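And the non-emptiness test that the whole satisfiability pipeline bottoms out in is just the reachability check described above. A sketch, reusing the NFA class from before:

```python
from collections import deque

def is_nonempty(nfa):
    """Language non-empty iff an accepting state is reachable from an
    initial state in the label-erased transition graph (linear-time BFS)."""
    edges = {}
    for (s, _letter), targets in nfa.trans.items():
        edges.setdefault(s, set()).update(targets)   # drop the labels
    seen, queue = set(nfa.initial), deque(nfa.initial)
    while queue:
        s = queue.popleft()
        if s in set(nfa.accepting):
            return True
        for t in edges.get(s, ()):
            if t not in seen:
                seen.add(t)
                queue.append(t)
    return False

assert is_nonempty(ends_with_one)
```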
Non-elementary is among the worst complexities you can imagine for a decidable problem. So we have a beautiful theory, with questionable applicability.

Now, around the same time, Alonzo Church, whom we know from recursive function theory, Church's Thesis, and the lambda calculus, gets interested in circuits. Do we have electrical engineers here? Alonzo Church is one of the first electrical engineers. He started to study the model of sequential circuits; if you're an electrical engineer, this is your Turing machine, the most fundamental computational model. What is a sequential circuit? It's essentially a circuit with memory. You have a set of input signals, a set of output signals, and a set of sequential elements, call them one-bit registers; these are the elements with memory. And you have two logic functions. One is a transition function: given the current assignment of the registers and an input assignment, what is the next assignment of the registers? How does the circuit update its state? The other is the output function: given a register assignment, what is the output? This is what we call the logic of the circuit. In some sense, the people who really work with logic all the time are not computer scientists but circuit designers. We also need an initial assignment. And how does this model compute? Well, it doesn't compute by itself; it has to be driven from the outside by feeding it a sequence of input vectors. We start in some initial state, r0, you give it a sequence of input assignments, and it makes transitions and produces outputs accordingly. So you get a trace. Each element in the trace is a triple of input, output, and registers, and the trace progresses, driven by the input vectors that come from the outside and by the logic of the circuit. (A small simulation sketch of this model follows below.)

Now, again, Church realizes that you can think of an infinite trace as a mathematical structure. Remember, a structure is a domain and predicates. What is the domain here? The trace is infinite, so the domain is the natural numbers. We need to be able to compare points in time, so we have less-than. And every circuit element can now be thought of as a monadic predicate, because at any given point in time a circuit element is high or low, true or false. So every circuit element is a predicate, and again we can use first-order logic. For example, we can say: for all x there exists y such that y is greater than x and p is true at y. If you think about what this means, it means that p must be true infinitely often: at any given point, you can go farther and find p true. So this says p must be high infinitely often. And again, we can quantify over sets of points, which gives us MSO with linear order over the infinite line.

Now Church formulates a question that in modern terminology we would call the model-checking problem. Such a circuit has many, many traces; each sequence of input vectors drives another trace. Given a formula that describes one trace, is the formula true in all possible traces of the circuit? This is exactly what we call model checking: we want to know whether the circuit always behaves in a particular way, except that here, instead of temporal logic, MSO is the temporal logic; it's the language of traces. Now, Church did not solve this problem.
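Here is the promised sketch of the sequential-circuit model just described: registers with an initial assignment, a transition function f and an output function g, driven by input vectors into a trace. The example circuit and all names are mine, purely for illustration.

```python
def run_circuit(f, g, r0, inputs):
    """Drive the circuit with a sequence of input assignments,
    collecting the trace of (input, output, registers) triples."""
    trace, r = [], r0
    for i in inputs:
        trace.append((i, g(r), r))
        r = f(r, i)          # the logic of the circuit updates the registers
    return trace

# Tiny example: one register remembering whether 'req' was ever high;
# the output function just echoes that register.
f = lambda r, i: (r[0] or i["req"],)
g = lambda r: {"seen_req": r[0]}
print(run_circuit(f, g, (False,),
                  [{"req": False}, {"req": True}, {"req": False}]))
```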
What Church did observe is that you can use first-order logic to encode the logic of the circuit: the functions f and g can be encoded in first-order logic. So, he said, you can basically reduce model checking to satisfiability. And he left as an open question the satisfiability of MSO with linear order over the infinite line. That problem was solved a couple of years later by J. Richard Büchi. What he did was take classical automata theory, which is about finite words, and extend it to finite automata on infinite words. Today we call it a Büchi automaton.

What is a Büchi automaton? Well, a Büchi automaton has an alphabet, a set of states, initial states, a transition function, and accepting states. So it's actually no different from an NFA. The only difference is that now we're going to feed it an infinite word, and as a result the run is going to be infinite. The automaton doesn't really know whether the word is finite or not; all it knows is: I'm in a state, I get a letter, I go to another state. But now we get an infinite run, so the concept of acceptance has to be revised. There is no point of decision anymore; it's not "get to the end and decide." You run forever. So Büchi proposed the following concept: if you visit an accepting state infinitely often, the run is accepting. It's a limit condition: if you infinitely often visit an accepting state, you accept.

Let's go back to the same automaton we had before: you enter on the left, 0 takes you to the left, 1 takes you to the right, and the right state is accepting. To accept, you have to visit the right state infinitely often, which means you must see infinitely many ones. So the same automaton that over finite words said "the last letter must be one" now says "you must see infinitely many ones." And the same way that NFAs define the class of regular languages, Büchi automata define the class of omega-regular languages. This is a class of languages over infinite words, a class people are less familiar with, because in the undergraduate curriculum, even the graduate curriculum, we do not routinely cover automata on infinite words. But there is a rich theory, and omega-regular is the most fundamental class of languages over infinite words.

Now, Büchi proved that the logic-automata connection extends to infinite words: the same way that MSO is equivalent to NFAs over finite words, MSO is equivalent to Büchi automata over infinite words. Both directions hold here, but I will mention only the hard direction, which I would call the compilation theorem. Given an MSO formula, you can build an automaton that accepts precisely the infinite words that satisfy the formula. So we can think of it as a compilation theorem: you have a high-level formalism, logic, and there is a mechanical way to compile it into a very low-level one. Automata are very low-level, right? It's the assembly level: you're in this state, go to that state. A very basic, very low-level model. So we have, again, a compilation theorem, and once we have it, we can again try to solve the satisfiability problem. Now, for Büchi automata, non-emptiness is a little more difficult, because it's not just a matter of going from an initial state to an accepting state.
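An infinite word cannot be fed to a program directly, but ultimately periodic words of the form u·v^ω can, and they suffice for the decision problems here. A sketch of the Büchi acceptance condition for a deterministic transition table (the nondeterministic case needs the lasso search we are about to meet); this simplification and the names are mine.

```python
def buchi_accepts_lasso(delta, q0, accepting, u, v):
    """Does the run on u.v^omega visit an accepting state infinitely often?
    Run on u, then pump copies of v until the state at the start of a copy
    repeats; accept iff the repeating cycle visits an accepting state."""
    q = q0
    for a in u:
        q = delta[(q, a)]
    seen, boundary = {}, []           # states at the start of each copy of v
    while q not in seen:
        seen[q] = len(boundary)
        boundary.append(q)
        for a in v:
            q = delta[(q, a)]
    visited, p = set(), boundary[seen[q]]   # collect states inside the cycle
    for _ in range(len(boundary) - seen[q]):
        for a in v:
            visited.add(p)
            p = delta[(p, a)]
            visited.add(p)
    return bool(visited & set(accepting))

delta = {("L", "0"): "L", ("L", "1"): "R", ("R", "0"): "L", ("R", "1"): "R"}
assert not buchi_accepts_lasso(delta, "L", {"R"}, "", "0")   # 0^omega: no ones
assert buchi_accepts_lasso(delta, "L", {"R"}, "", "01")      # (01)^omega: infinitely many
```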
It's not very difficult, though, to show that what you need to be able to do is go from an initial state to an accepting state and then cycle back to that same state, because you have to visit such a state infinitely often. This is called a lasso. (Someone put a picture of a lasso on the web in 2005, which I've been using ever since; I didn't even know whose it was.) So this is called the lasso test. And again, we have the same algorithm: to check satisfiability for the logic, translate into an automaton and check non-emptiness of the automaton.

So Church was very happy when he heard about this: the problem he wanted to solve, model checking, was solved. And in a lecture he gave in 1960, he mumbled something about it not being a very efficient algorithm. It's interesting, because this was before people had really started thinking about complexity; only in 1974 would Larry Stockmeyer show that the problem is non-elementary. But already then, Church realized it was not the best algorithm in the world, because he saw these repeated exponential explosions.

Now we go to the end of the world. Literally the end of the world: Christchurch, New Zealand, the southernmost, I think, inhabited city in the world. And we find there a philosopher, Arthur Norman Prior, a real philosopher. He was interested in religion throughout his life: he was born a Methodist, changed to Presbyterian, became an atheist, and before he died, just to be on the safe side, became an agnostic. He was interested in logic and in ethics; he wrote a book, Logic and the Basis of Ethics. Now, if you're thinking about ethics and logic, there is one colossal problem: the problem of free will. Why? Because we are morally responsible because there is a sense in which we have agency, in which we make decisions out of our free will, for good or for bad. We think we can influence how the world will unfold. I can decide to scratch an ear; I will choose, okay, it's going to be the left ear, and I will scratch it with my right arm. I made this decision here; I could have done it the other way around, and the whole world would unfold in a different way. On the other hand, the Methodists especially, I think, believe in predestination: God has already determined how the world will unfold; it is all predestined. But then we are just puppets, right? We're just following the heavenly script. So there is an attempt to reconcile the two, which says: you have free choice, but God knows what you're going to choose. It's called foreknowledge. So yes, you have free choice, but God knows what you're going to choose. Now, if you think about that for 30 seconds, it's a little problematic, just logically. But to formalize it, you need some way to think about how things unfold in time. And so Prior realized: you must think about time. And in December 1953, he came up with the concept of a logic of time. After he died, there was an interview with his wife, and she remembered distinctly (they had separate beds) that she was already asleep when he came and woke her up and said: wow, I have a logic for time. I like this story very much. You wake up your wife whenever you have a new result. He ended up publishing a book a few years later, Time and Modality, a book that really started a whole research area. Today, I don't know how many people work in this area, and it all goes back to Arthur Norman Prior.
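Before following Prior any further, here is the lasso test from a moment ago in executable form: non-emptiness of a Büchi automaton as "reach an accepting state, then cycle back to it." A quadratic sketch (real tools use nested DFS or SCC decomposition), reusing the earlier NFA class read as a Büchi automaton.

```python
def reach(edges, sources):
    seen, stack = set(sources), list(sources)
    while stack:
        s = stack.pop()
        for t in edges.get(s, ()):
            if t not in seen:
                seen.add(t)
                stack.append(t)
    return seen

def buchi_nonempty(aut):
    """Non-empty iff some accepting state is reachable from an initial
    state and lies on a cycle (the lasso)."""
    edges = {}
    for (s, _a), ts in aut.trans.items():
        edges.setdefault(s, set()).update(ts)
    for f in reach(edges, aut.initial) & set(aut.accepting):
        if f in reach(edges, edges.get(f, ())):   # f can cycle back to itself
            return True
    return False

assert buchi_nonempty(ends_with_one)   # lasso: loop on 1 through state R
```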
Now, when Prior first talked about time, going back to his lectures in 1954 and then the lectures he gave at Oxford, out of which the book came, time for him was very simple: time just unfolds in a linear fashion. Arnold Toynbee, the famous British historian, said history is just one thing after the other. This happened, then that happened; it just unfolds. But after the book came out, Prior gets a letter from Saul Kripke. Saul Kripke is one of the foremost philosophers of the 20th century, still alive and well. And Kripke writes, roughly: in an indeterminate system, we perhaps should not regard time as a linear series, as you have done. Given the present moment, there are several possibilities for what the next moment may be like, and for each possible next moment, there are several possibilities for the moment after that. Thus the situation takes the form, not of a linear sequence, but of a tree. Amazingly, this is not Kripke the famous philosopher; this is Kripke the roughly 17-year-old high-school student in Omaha, Nebraska. A very precocious guy. So Kripke is the one who invents branching time. And that means we have a choice between thinking of time as unfolding along linear traces or as unfolding into a tree. A big choice, a philosophical choice.

Now, remember, Prior's concern was free will, and free will is about not only the things you do but the things you could have done. So from his point of view, the tree was the right thing, and he immediately accepted Kripke's suggestion. He kept his previous syntax, so now he interprets a linear syntax over a tree. There was something called the Peircean approach and the Ockhamist approach; I won't go into detail. Very, very tricky to do. Today, I find it impossible to read his book.

Yes? So this is a literary question. I mean, there are movies about this. There was a British movie, Sliding Doors, which examines what happens if you run to catch the train and you miss it, and now there is a whole other future. Will the two futures come back together or not? So now you're asking: should we think about a tree, or should we think about a DAG, perhaps? That's a good question; maybe the branches should come back together. But that goes beyond what we are going to do here. And partly the point I'm trying to make is that it's very hard today to read Prior. When you open the book, you find formulas like this; I copied this symbol by symbol from the book, and I have no idea what it says. They did not have good fonts, so even for the logical connectives, where today we use wedge and vee or ampersand, they used Roman letters for everything, and prefix notation on top of that. So I open the book today and I just cannot read it at all. But I read somebody else who tells me that Prior would have agreed that determinist time is a line, and indeterminist time is a system of forking paths.

Now, even if you accept branching time, philosophically it is still very tricky. Prior argued that the nature of the course of time is branching, but the course of events is linear, because one thing happens after the other. But Rescher, another philosopher who studied this, said no: time is linear, but the course of events is branching. And he said we have branching in time, not branching of time. Now, I have read this many, many times and I have no idea what it is saying; to me it's completely obscure. And I couldn't even ask Rescher; I don't think he's alive anymore.
I asked one of his students: can you explain it? He said, no, not really. But while some philosophers were happy to have a richer structure like branching time, many were happy to continue studying linear time. Some of them are important names: Hans Kamp, whom I'll mention shortly, and Dana Scott; I'm sure many of you have heard of Dana Scott. They continued to study linear time. In particular, Hans Kamp asked the following question: what is the relationship between the formalisms we are looking at, linear temporal logic on one side and classical logic, first-order logic and monadic second-order logic, on the other? You can define first-order logic and MSO over the time line. And he proved an equivalence. He proved in 1968, in his PhD thesis, that if you take linear temporal logic with the binary connectives since and until (we'll see in a few minutes what until is; since is its past-looking dual), you get exactly the expressive power of first-order logic. So temporal logic essentially is first-order logic, from the expressiveness point of view.

Now, even the philosophers, back in the 50s and 60s, were thinking: ah, there is this computer; this must be relevant to computers. Prior himself wrote that there are practical gains to be had from the study too, for example in the representation of time delays in computer circuits. Wow. I don't know how he knew then about time delays in computer circuits. And Rescher and Urquhart in 1971 wrote that there must be applications to processes which are programmed sequences of states, deterministic or stochastic.

But the real, serious connection to computer science came in 1977, and it is due to Amir Pnueli, who is not with us anymore; he passed away in 2009, five years ago. Pnueli started thinking about this in the mid-70s. In the early 70s there was a lot of work on reasoning about programs; Hoare logic started in 1969. And Pnueli started thinking: all this work on Hoare logic is about programs that have an input and an output. What happens if you have machines that just keep running forever? This laptop does not really have input and output; it interacts with me. You have microprocessors, you have protocols; there are many, many systems that interact with you. So he said: we need to reason not just about input and output, but about the whole evolution of the computation in time, in an unbounded fashion. He looked for a logic to do this, and he ended up discovering temporal logic. The way he told me the story: he had a paper with a student in 1975 where he used first-order logic, and he was very unhappy with the logic. He gave a talk, and after the talk someone told him: you should take a look at deontic logic. Deontic logic is the modal logic of permissions and obligations. He knew nothing about it, so he goes to the library, opens a book on deontic logic, spends an hour reading, and says: this is a complete red herring, there's no connection. And as he closes the book, on the back cover it says "other books by the same publisher," and it mentions the book on temporal logic by Rescher and Urquhart. So he goes back to the shelf (it was just the next book on the shelf), picks it up, starts reading, and says: that's exactly what I was looking for. This is what is really called serendipity. So he then proposed linear temporal logic as a language to talk about non-terminating programs.
Let me jump a little ahead to show you what linear temporal logic is. You start from propositional logic, and you add temporal connectives. You can say: this is true today, and that will be true tomorrow. Today it is not raining, but tomorrow it will be raining. So you have next time, which talks about the next point in time; we're talking here about discrete time (there's a whole other family of logics for continuous time, but here we talk about discrete clock ticks). You can say something will happen eventually. You can say something is always true. And you can even say: phi will be true until psi is true. I will not give the formal semantics; I'll just show you the picture. Next simply talks about the next point in time, one tick forward. And until: phi until psi says phi will be true, true, true, true, until at some point psi is true. So you can say: I will stay hungry until I eat dinner.

So Pnueli proposed using this logic, and he again talks about model checking, and he notes that we can reduce LTL to monadic second-order logic. Of course, he knows the complexity will be horrible, but at this point he's not really worried about complexity at all; he just wants a logical framework.

Let me show you a couple of examples of what you can say in LTL. I will follow the biblical edict: depart from evil and do good. When you talk about a system, you usually say that bad things should not happen and that good things should happen. A bad thing, for example, is a violation of mutual exclusion: sometimes two things should not happen at the same time, and you can say "always not (critical-section-1 and critical-section-2)." It's a bad thing for two processes to be in their critical sections at the same time. And a good thing that you want: if you make a request, then eventually it is granted. For example, if you want to ask a question here, you raise your hand. But actually the protocol is not very clear, because I cannot see the whole room at once; if I'm looking here, I may not see you there. So how long do you have to keep your hand in the air? After a while, about five minutes, it's going to start hurting a little bit, okay? So you could say: maybe I just raise my hand once and that's enough. Or you could say: if you raise your hand, keep it in the air until the request is granted. Maybe it's not so important for us here, but if you have two components on a chip and one is trying to get the attention of the other, you need a precise description of the protocol. Does the semaphore stay on or not? We have to know.

Now, what is the expressive power of LTL? The framework is slightly different from the one Hans Kamp was using; I won't get into the details. Kamp worked over the integers, here we are over the natural numbers. So Pnueli recruited some very heavy-duty logicians to help him with the question, and the answer came back: again, equivalent to first-order logic. Interestingly and independently, Wolfgang Thomas in Europe was looking at the expressive power of first-order logic over words, and he showed it's equivalent to regular expressions without star: the star-free omega-regular expressions. So we really have a little hierarchy with two levels. First-order logic is LTL, and both are the star-free omega-regular expressions.
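As an aside before finishing the expressiveness story: the LTL semantics just sketched can be made executable on ultimately periodic traces u·v^ω, where each position carries a set of atomic propositions. A minimal sketch; the tuple encoding of formulas is my own.

```python
def holds(phi, pos, u, v):
    """LTL over the trace u.v^omega. phi is a nested tuple:
    ("true",), ("ap", p), ("not", f), ("and", f, g), ("X", f), ("U", f, g)."""
    n, p = len(u), len(v)
    norm = lambda i: i if i < n else n + (i - n) % p   # fold into the loop
    letter = lambda i: u[i] if i < n else v[i - n]
    op = phi[0]
    if op == "true": return True
    if op == "ap":   return phi[1] in letter(norm(pos))
    if op == "not":  return not holds(phi[1], pos, u, v)
    if op == "and":  return holds(phi[1], pos, u, v) and holds(phi[2], pos, u, v)
    if op == "X":    return holds(phi[1], norm(pos + 1), u, v)
    if op == "U":    # walk forward; normalized positions repeat eventually
        i, seen = norm(pos), set()
        while i not in seen:
            seen.add(i)
            if holds(phi[2], i, u, v): return True
            if not holds(phi[1], i, u, v): return False
            i = norm(i + 1)
        return False

# Derived connectives, and the request/grant property: always(req -> eventually grant)
F = lambda f: ("U", ("true",), f)
G = lambda f: ("not", F(("not", f)))
req_grant = G(("not", ("and", ("ap", "req"), ("not", F(("ap", "grant"))))))
assert holds(req_grant, 0, [{"req"}], [set(), {"grant"}])
```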
Back to the hierarchy: MSO is more expressive, because it gives all of the omega-regular languages, omega-regular expressions with star. So it's a simple two-level hierarchy. Now, in 1980, Albert Meyer, a well-known logician and computer scientist at MIT, gives a talk called "Ten Thousand and One Logics of Programming." And he puts down LTL: look, it's just first-order logic. So, the corollary due to Meyer (I have to get in my controversial remark; these are his words, I'm quoting verbatim): does this result make LTL theoretically uninteresting? It's first-order logic, so why do we need another logic? Now, what's amazing about this is that Albert Meyer made his fame by studying the complexity of logical theories. The natural question to ask here is: first-order logic, by Stockmeyer, is non-elementary; LTL is equivalent to first-order logic; so what is the complexity of LTL? Expressive equivalence tells us nothing about complexity. If anybody should have asked that question, it was Albert Meyer. But he was too busy getting in his controversial remark to ask the natural question. And within a year, people showed that LTL is in fact much, much better computationally. First it was shown to be decidable in exponential time, and a year later, independently, people showed that LTL satisfiability is actually PSPACE-complete. The basic technique is the technique of tableaux, which comes from logic. So first-order logic is non-elementary, and LTL is PSPACE; and PSPACE is practically polynomial time for us. If you are schooled in complexity, then compared to non-elementary, we should be celebrating that it's only PSPACE.

Now, Pnueli focused purely on the future. Hans Kamp had until and since, so he also talked about the past; but Pnueli's first paper talks only about the future. He said we should talk about how the program is going to unfold. Then in '85, together with his students, he asked: should we also have connectives that talk about the past? And he showed that the past can actually be useful. For example, you want to say: always, if you receive a message, it must have been sent earlier. What they were able to show is that this does not give you extra expressive power (it's the same expressive power) and you do not pay more in complexity: satisfiability is still PSPACE-complete. They did not know exactly about succinctness, and that had to wait more than 15 years, until Nicolas Markey showed that LTL with past is exponentially more succinct than LTL. So this almost looks like what we would call a free-lunch theorem: you have a logic with the same expressive power and no higher complexity, but exponentially more succinct. You get succinctness without paying a penalty in complexity.

Now, I said that Pnueli's paper was kind of a big bang in our area. Another one was the two papers around '81, '82 that introduced model checking, an algorithmic technique for checking whether a program satisfies a formula. The first two papers dealt with another logic, called CTL (we'll see what CTL is in a few minutes), and they showed that the complexity is very nice: linear in the size of the state space of the program and linear in the size of the formula. Pnueli himself liked LTL better, so he wrote a paper in '85 about what happens if you do model checking with LTL, and he showed that it's linear in the program but exponential in the formula.
But he and his co-authors argued that the state space is usually very large (we talk about the state-space explosion) while the formulas are fairly small, so we can live with exponential in the formula. Again, the technique was tableaux.

So now we are in the early 80s, and it looks like you have a choice. If you want efficient algorithms, use tableaux; but tableaux are somehow ad hoc, less intellectually satisfying. Automata give us a beautiful theoretical framework, but they seem to give us non-elementary complexity. Can we bridge this gap? Pierre Wolper and I were at Stanford at the time, and we were very intrigued by this, and we were able to show that you can bridge the gap by going directly from LTL to automata. As we saw before, you can go from LTL to first-order logic, and first-order logic is a fragment of MSO, so you could use that route; but it costs you a non-elementary blow-up. If you go directly from LTL to automata, the blow-up is only exponential. Normally we think of exponential as bad, but remember, we started from non-elementary; one exponential is really good news, and in fact you cannot do better than one exponential. And all the other results that you want, like LTL satisfiability and model checking, fall out very easily from that translation. Once you have the translation, you have all the ingredients for optimal algorithms. In '88, I was able to show that this holds even for LTL with past; you have to work a little harder, because with the past you need two-way automata, but it can be done.

Now, all of this was purely theoretical. Really, I just wanted to publish nice papers at the time. But the question arose: is this a good basis for practical algorithms? That took more than 10 years: looking at all the algorithms, doing good algorithmic engineering, finding good algorithms for model checking and for the LTL-to-automata compilation. By the mid-90s, we had two model checkers that could handle LTL. One was SPIN, developed by Gerard Holzmann, which uses a language called Promela for modeling protocols and can use LTL. The other was SMV, due to Ken McMillan, which does symbolic model checking, again with LTL.

Now, around that time, around 1995, Church passed away, at a very old age, I think 92. I never met him; I never had the chance. But because it happened so close, I kind of imagine sitting by his deathbed, in an imaginary conversation. I say: you know, this question that you asked in 1957? We finally nailed it. Now we have effective tools for it. And he would say, in a weak voice: not quite. You're only doing LTL, which is first-order logic, and I asked about MSO. And we know that MSO is more expressive than LTL.

So indeed, that question occupied us for quite some time: can we enhance LTL to reach the expressive power of MSO, fully omega-regular? We came up with various devices. One device, proposed by Pierre Wolper, was to use automata in the language; a grammar operator, he called it. And then we studied what happens if you add second-order quantifiers, and what happens if you add fixed points. By the end of the decade, the end of the 80s, we knew there are two ways to enhance LTL and retain the PSPACE complexity. Using quantifiers was a bad idea; it doesn't keep the complexity down. I'll say in a moment what the two ways that do work are.
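A quick aside on why the direct LTL-to-automata route costs only one exponential: the automaton's states can be taken to be subsets of the formula's closure, its subformulas and their negations, so there are at most exponentially many of them in the formula size. A sketch of the closure computation, reusing the tuple syntax above; the actual state construction (consistency conditions, transitions, fairness) is omitted here.

```python
def closure(phi):
    """Subformulas of phi, closed under (single) negation."""
    cl = set()
    def walk(f):
        if f in cl:
            return
        cl.add(f)
        if f[0] in ("not", "X"):
            walk(f[1])
        elif f[0] in ("and", "U"):
            walk(f[1])
            walk(f[2])
    walk(phi)
    return cl | {("not", f) for f in cl if f[0] != "not"}

c = closure(req_grant)
print(len(c), "closure formulas -> at most 2 **", len(c), "automaton states")
```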
The two ways that do work: either you put automata inside the logic (I won't say exactly how), or you add fixed points, à la mu-calculus; if you have heard of the mu-calculus, these are the fixed points you add to the logic. Two ways to take LTL and make it more expressive while retaining the exponential compilation property, which means you get optimal decision procedures.

Now, completely independently, Vaughan Pratt developed dynamic logic. Interestingly, his story of how he developed dynamic logic is very similar to Pnueli's story. He gives a talk about Hoare logic, and somebody tells him: you should look at modal logic. He knows nothing at this point about modal logic. But Pratt was a voracious scholar; once you told him "modal logic," he went and read everything about modal logic. And he takes the box of modal logic; the box of modal logic means "necessarily," which of course has many, many interpretations: epistemic, belief, all kinds of interpretations. And he says: well, let's say that the box means "after one step of the program." And then: why don't we put the program inside the box? So box-alpha-phi now means: after you execute alpha, phi must be true. And this is really the basis for the Hoare triple: the Hoare triple {psi} alpha {phi} means psi implies box-alpha-phi. So instead of just a language of triples, we get a full logic, and this became known as dynamic logic. Pratt was looking at the first-order version; then Fischer and Ladner proposed the propositional version, PDL, in which you have atomic propositions. And what are propositional programs? Regular expressions. That language was shown to be complete for exponential time, with a decision procedure that again uses tableaux.

So now we have these two developments: dynamic logic on one hand, temporal logic on the other. Dynamic logic is about input-output programs; temporal logic is about ongoing programs. People tried to take dynamic logic and extend it to ongoing programs, but they got very clunky logics; it was not successful. But dynamic logic is clearly branching, while temporal logic at this point was linear. So people started studying branching temporal logic. There is a whole line of work, and ultimately, I think, the ultimate branching logic is CTL*, which has an explicit concept of path quantifiers. Because in branching time you have many futures; there is no "the" future to talk about. You can say "in all futures, something will happen," or "in some future, something will happen." So you need quantifiers; you need to quantify over futures. And I'm convinced that Prior, by then long dead, says in his grave: oh, I missed that. I should have thought of path quantifiers. Because that was clearly the element missing in his logic: talking about different futures.

So now we have the big debate of the 80s between the linearists and the branchists. Either you like time to be simple, just linear, and you use LTL and say "always, request implies eventually grant." Or you think of time as a tree, and now you cannot talk about "the" future; to say the same thing, you have to say "in all futures, always, if you make a request, then in all futures, eventually it will be granted." You have to put in path quantifiers all the time.
You get a logic called CTL. Now, immediately people ask: can we combine the dynamic world with the temporal world? And again, many, many papers; I won't go through all of them. The interesting thing is that in the late 90s and early 2000s this was actually done in an industrial setting. Folks at IBM Haifa in Israel took CTL and added regular expressions. First they had a logic they called RCTL, regular CTL; then they developed it further into a logic called Sugar, which is CTL plus regular expressions.

Now we are reaching the industrial phase of the story. Intel started using model checking in 1990. They had a pilot with Bob Kurshan from AT&T, and he was able to use model checking to find a very, very tricky bug in cache coherence. So they started building their own model checkers. They used SMV first, and they developed their own language to describe properties, a language developed by engineers, linear-time, but very clunky. Then, in 1997, they decided to build their own home-grown engines: a BDD-based model checker and, as I recall, a SAT-based one. And they developed a language called ForSpec. I was involved in the development of ForSpec; in 1997 they asked me to come and consult with them on developing the language. And the first thing I said was: why not use LTL? A nice little language, simple semantics. And the answer was, for me, a real epiphany, a real revelation. They said: the language is not expressive enough. Now, these are people in industry; what did they mean by that? They had no idea that LTL is first-order and weaker than MSO. What they meant was: our engineers want to say certain things, and the logic is too weak to say them. For me it was an epiphany because I had thought expressiveness was a theoretical issue, and what I learned is that expressiveness is a human-factors issue: can you say the things you need to say? And they said LTL is just too weak for us.

Now, of course, I had all these papers on how to augment LTL and get more expressive power. I said: here are the papers, read my papers. You can put automata into the logic and you get the full expressive power. Even better, you can put in fixed points; they're beautiful, really, trust me, they're very beautiful. And they don't buy it. Basically they said: the people who will use this have bachelor's degrees, and this is too sophisticated for them. But we still made enough progress that they asked me to come again the next summer. The next summer, when I came, I was a bit more educated, more housebroken, as we say. Even though I love the mu-calculus and fixed points, I did not even dare suggest them anymore. But I still thought that automata, finite-state machines, had a chance. So again I proposed ETL, temporal logic extended with automata. And again I was told: this will not work. But then Avner Landver, who worked with me and knew about the work at IBM, asked me: what about regular expressions? I said: you realize, of course, that regular expressions are equivalent to automata. He said: really? I said: yes, I'll show you, it's in the book; they're equivalent. So I said: do you mean to tell me that users will object to automata but will accept regular expressions? He said: absolutely. Everybody loves regular expressions.
I said: we have a deal. So the logic that we developed, call it RE-LTL, is LTL plus regular expressions. What is it? It really takes the modality of dynamic logic, box-e-phi, where e is a regular expression, but everything is interpreted over the line. In modal logic you interpret the box over a Kripke structure; here, just over the trace. So what does box-e-phi mean? For example, suppose I write box of (true*; send; not cancel), send. What does it say? True* is some sequence of points in time; then there is a send; then in the next cycle it is not cancelled; then send must hold in the following cycle. So once you do a send, you have one cycle to cancel it, and if you didn't cancel, it must be sent again. (A small executable sketch of this semantics appears at the end of this part of the story.) Because we already had all these results about ETL, temporal logic with automata, it was very easy to prove that RE-LTL is equivalent to MSO: it is fully omega-regular. And ForSpec, the language that came out, is really RE-LTL; all the rest is syntactic sugar, all kinds of features that designers need to have, but theoretically not very interesting. The interesting step was taking dynamic modalities and adding them to LTL.

Now, at this point, around 2000, the electronic design automation industry decides it needs to take model checking seriously, and for that it needs a standard property language. They realized that without a standard, every company has its own language, and it's a mess; they needed to develop a standard. So they establish a standards committee, and they start the process in 2000. The committee gets together, they sit around a table, and it turns out they have no idea how to design a language. So they say: what we should do is seek donations. They put out a call for donations, asking companies to propose languages. Four companies proposed languages: IBM, Intel, Motorola, and Verisity. (Verisity doesn't exist anymore; it was bought by Cadence, which is still an EDA company.) Each proposed its own home language as the candidate for the standard. Out of these four languages, one, IBM's, is branching time; all the rest are linear time. So the first item of business is to decide: branching time or linear time? So now you have a standards committee, mostly composed of engineers, debating whether we have branching in time or branching of time. It was a bit surrealistic. And it ended, like many other things, in a kind of split victory. IBM had the political victory: their language was selected, which is why you sometimes see references to PSL/Sugar. PSL is just an acronym: Property Specification Language. But Intel really won technically, intellectually, because PSL has all the components of ForSpec: LTL, regular expressions, clocks, and resets. All the elements that existed in ForSpec were selected. IBM had to change from branching time to linear time; there are what they call branching-time extensions, but they are not really a serious part of PSL. Eventually another standard was developed, SVA, again based on the very same idea of using regular expressions over linear time. And this has given a tremendous push to model checking, because every EDA company today has a model checker, and they all basically use these standard languages.
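Here is the promised sketch of the box-e-phi semantics from the ForSpec example, read over a finite trace. The regular expression is given as an NFA (reusing the earlier class) whose "letters" are predicates on trace positions, and [e]phi demands phi at the position right after every match of e; this is one of several conventions in the literature, and the encoding is mine.

```python
def match_ends(e, trace, start):
    """All j such that e matches trace[start:j]; e is an NFA whose
    'letters' are predicates on trace entries."""
    current, ends = set(e.initial), set()
    for j in range(start, len(trace) + 1):
        if current & set(e.accepting):
            ends.add(j)
        if j == len(trace):
            break
        current = {t for s in current
                     for (q, pred), ts in e.trans.items()
                     if q == s and pred(trace[j]) for t in ts}
    return ends

def box(e, phi, trace, i=0):
    """[e]phi at i: phi holds right after every match of e (finite-trace sketch)."""
    return all(phi(trace, j) for j in match_ends(e, trace, i) if j < len(trace))

# e = true* ; send ; not-cancel   -- then send must hold at the next cycle
anyp = lambda s: True
send = lambda s: "send" in s
no_cancel = lambda s: "cancel" not in s
e = NFA({0, 1, 2}, {0},
        {(0, anyp): {0}, (0, send): {1}, (1, no_cancel): {2}}, {2})
must_resend = lambda tr, j: "send" in tr[j]
assert box(e, must_resend, [{"send"}, set(), {"send"}])
assert not box(e, must_resend, [{"send"}, set(), set()])
```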
Now, remember, at some point we added the past. What about the past? Neither of these languages includes the past in a serious way, and for two reasons. One is that the algorithms still struggle with the past; even though theoretically we know how to handle it, algorithmically it is still a struggle. But more seriously, it turns out that when you do model checking, the past is not very important. Why? Because if you want to talk about something that happened in the past, and it is really relevant to the circuit, then the circuit needs to remember it. And how does a circuit remember? There is some register that remembers the event from the past. So every property about the past really turns into a property about the present state of the circuit. But if you don't have a circuit yet, and you're just writing the original specification, then, as we saw before with Pnueli and his students, the past is very convenient. You want to say: if a message is received, it must have been sent before; and there is no register you can point to that remembers the message.

So in fact, people have been proposing: let's go back and add the past; let's take these languages and add the past. Two teams independently proposed something called regular temporal logic, where you start from past LTL and add the dynamic modalities, and for every such modality you add a dual that looks into the past. And they prove, again, the same kind of theorem: it's equivalent to RE-LTL, it's more succinct, but satisfiability is still PSPACE-complete; it is not more expensive computationally.

But it turns out you don't really need all these additions; a very minimal addition suffices. First of all, let's get rid of the temporal connectives. Once you have dynamic modalities, you don't need anything else. For example, to say "always q": just box, true star, q. At any point you reach via true*, q must be true. So you can drop the temporal connectives; you only need the dynamic modalities. And to get the past, you need only one very simple extension: the concept of a letter that goes backward instead of forward. Normally, a letter a means "consume a and go forward"; a-minus means "consume a and go backward." If you're familiar with PDL, it had this concept of converse, which is likewise going backward; you need it for weakest preconditions, where you have to go backward. If you're familiar with XPath, the web language, it too has a concept of backward navigation. So we can now say something like this: box, true star, receive; so you go forward to a point where there is a receive; then diamond, it is possible to go backward, using (true-minus) star, and see a send. This is how you say that every receive must be preceded by a send. And again, you can prove that you get a logic equivalent to RE-LTL, so it's fully omega-regular; it is exponentially more succinct than RE-LTL; and satisfiability is still PSPACE-complete.

So, does box mean always? Box, true star, means: whenever you execute true star and then you execute receive, for all ways of matching; the diamond means there is some way of matching. Box means "for all," diamond means "for some."
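The converse example just given, box (true*; receive) diamond ((true-minus)*) send, has a neat one-pass reading on a finite trace, which also illustrates the earlier point that past properties compile into a register: the single boolean below is exactly the register a circuit would keep. A sketch; note that a send at the same position counts, since (true-minus)* includes zero backward steps.

```python
def every_receive_preceded_by_send(trace):
    """[true*; receive] <(true-)*> send, read over a finite trace."""
    sent = False                       # the 'register' remembering the past
    for letter in trace:
        if "send" in letter:
            sent = True
        if "receive" in letter and not sent:
            return False
    return True

assert every_receive_preceded_by_send([{"send"}, set(), {"receive"}])
assert not every_receive_preceded_by_send([{"receive"}, {"send"}])
```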
So LTL is dead; long live LDL, linear dynamic logic. What was so important about LTL? The semantics was very simple: linear time. The syntax is very simple. It has the exponential compilation property, which we saw is critical for optimal algorithms. And it's equivalent to first-order logic. Now look at LDL. It's linear time. The syntax is as simple as it could be: just regular expressions, with the backward addition. It has the exponential compilation property. And it's equivalent to MSO. Not only that, but in American English the T and the D are very close to each other. If I say "Adam," is it the husband of Eve, or is it the little particle? They sound the same in American English. So if you speak American English (which I don't), LTL and LDL sound exactly the same; you don't even have to change the way you pronounce it. Thank you very much.

Crystal clear. No, LDL is not a standard, but LDL in some sense is the core of the standards. Strictly, LDL is incomparable to the current standards, because it has the past and the current standards do not; they have only a very limited past. You can go back to the past only in a limited way, a fixed number of cycles, but not unboundedly. On the other hand, if you ignore the issue of the past, LDL is the core of all these standards. I mean, there is an effort of making standards, but as academics we should focus on cleaner languages, on the core. The real standards are very big, messy languages. It's what I said: the camel is an animal designed by a committee. There are people who say "I need this feature," there are big debates about that feature, and you end up throwing all these features in. I think in academia we should go to the semantic core of the language. And the core of the language is really, think of it, PDL over traces: you take converse PDL and interpret it over traces. That's it; it's that simple. And it has all the expressive power that you need. So if we think about complexity, that's the logic we should focus on. In fact, if you look at academic papers that talk about PSL, they are not handling the full complexity of PSL. I know; I worked on PSL. PSL has all kinds of pitfalls because of people demanding all kinds of features. Even people who really talk about PSL are really talking about RE-LTL, LTL plus regular expressions. And even there, we have a mixture of temporal connectives and regular expressions. But the core of all this, and I kick myself, because all the ideas for it existed in 1990 and it took almost 25 years to put them together, is simply: take dynamic modalities and interpret them over linear time.

How does industry relate to this? SVA actually stands for SystemVerilog Assertions; SVA is part of the SystemVerilog standard. SVA is an IEEE standard, and PSL is an IEEE standard; they are both IEEE standards. The software industry, now, is less well organized, so to speak. SVA is very specific to SystemVerilog; PSL was meant to be more general, and if you look at the PSL manual, they discuss how you can adapt it also to software. But it's not clear to me that people in the software industry really use PSL; I have not seen people really use it there. The real place to look would be not the software industry but telecommunications; there are people who do model checking of communication protocols. I don't know exactly what they're using. That's a good question.
But again, if you look at the products that come from the EDA industry, from Cadence, from Synopsys, from Mentor, they usually support SVA and PSL. Now, the difficult part, and this is a battle I actually lost: I argued that putting automata in the logic is the right thing to do, because if you want to model a finite-state machine in the logic, it's very natural to do it by writing a finite-state machine, rather than the regular expression that corresponds to it. But that's the battle I lost. In some sense, to me now, it's just a syntactic difference: regular expressions and finite-state machines, we know they're equivalent, so it's a matter of what syntax people choose to use. For some reason people are very comfortable with regular expressions (everybody knows them, from Perl and the like) and less comfortable with finite-state machines. But what you need is the ability to model finite-state machines, because in many cases you want to write a little controller, and it is very easy to model it as a finite-state machine and convoluted to model it as a regular expression.

No, no: everything here works over infinite words; everything is infinite. Now, concurrent systems are a whole other matter, because there you get to the issue of what a trace is. And that, I would say, is less of a success story for formal methods. People try to say: well, I can look at some notion of global time and still look at traces. But really, there are independent events, so you want what's called trace theory; I have been using the word trace in the sense of a simulation trace, but there is the whole concept of trace theory, which allows for independent events. So people wanted to develop a trace LTL, and the idea was that we would avoid the state-space explosion that interleaving forces on us, and instead have a logic that speaks directly about partially ordered traces. This has not been a success story. These logics turn out to be problematic, because the concept is much richer now: instead of a line, which is a very simple concept, we have something that combines a line with a partial order. I have not worked directly in this area, but you can see that almost everything here is a little bit like what happened with real time: nice ideas, but then complexity kills us. The impediment to tech transfer is that the theory we have is not very practical. Remember the difference between theory and practice? In theory they are not very different, but in practice, they are.

Yes, Krishna? No, I think that before we do a timed version of LDL, we should figure out the right timed version of LTL; then we should go to LDL and do the timed version of LDL. Well, it's not quite true that nothing has been done. People have done some work trying to do regular expressions, not so much in the context of LDL, but more in the context of timed automata: we have automata and we have regular expressions, so people asked, given timed automata, is there some notion of timed regular expressions? So yes, there is some work along this line.
Yeah, but partly you have to look at the context in which people are using this. The part of the industry that uses it works with synchronous clocks; it's about synchronizing events, less about real-time issues. Of course, there are other parts of the industry where time delays are very important and it's not enough to count cycles; there you really want real time. That part has been somewhat less successful, because, as we know, the theory is more difficult. So there, if people use anything, they use timed automata, with very little in terms of specification languages.

No, first, remember that LTL is equivalent to first-order logic. Ah, I see; so yes, here we are really dealing with time points that carry just propositional information. And of course, you can say: at every time point I have a full structure. This comes up, for example, if you want to reason about temporal databases: I have a database, the database evolves through different states, and for each state I need a whole machinery just to describe the database, and then things also unfold in time. So yes, people have worked on this. Part of the problem is that it's a very rich formalism, and very quickly you get undecidability; so part of the question is how to limit the formalism. There is work on temporal integrity constraints, for example: a database has not only static integrity constraints but also temporal integrity constraints, and people have asked how to monitor them. How do you check, at any given point in time, that the history of the database satisfies the constraints, without keeping the full history? You want somehow to compress the whole history into a snapshot. Yes, there is a lot of work on that.

Well, the answer is: we don't know, because to answer this question there would have to be a lot of practical experience, and with LDL we don't have the practical experience yet. What we do know is that the exponential blow-up of the compilation rarely happens in practice. It's just like this: we know, for example, that SQL query evaluation is PSPACE-complete. So when I teach a database class, I tell the students: by the end of the course, you will know how to bring any database engine to its knees; I will teach you that. And they get excited: they're going to break something. It is very easy, trivial in fact, to write queries that no database system can handle. But these are not queries that arise in practice. The same thing here: if you know how the algorithm works, you can write LTL properties that will blow up any model checker, but in practice we don't see them. In practice, people write fairly simple properties, and we rarely see the exponential blow-up. But I said rarely, not never. We once took some properties that people had actually written to specify a protocol, and we said: oh, great, a real example, let's feed it to our translator. And the translator just could not handle it. It was actually funny, because we then wrote a special script, and the automaton needed only about 1,000 states. Not humongous, right? What is an automaton? It's a graph, a graph with 1,000 states. I didn't think that was very large.
But the translator just could not handle it, because somewhere in the intermediate stages it would blow up. So the issue of how to translate effectively from LTL to automata: by now there are many dozens of papers on it. We know the basic algorithm; the rest is all heuristics and experimentation, seeing what works in practice. It is very heuristic at this point. Thank you very much.