Okay, so I'm Steve Miner. Our topic today is challenges for logic programming. This is gonna be a little bit philosophical, no code involved, so we'll ease into Saturday morning. I hope that works out okay for you. So first, any talk about logic is gonna have a prologue — a little pun to open with. I don't know, maybe it's too early for that. So last year at the Conj, there was a lot of enthusiasm about logic programming. Dan Friedman and William Byrd kind of stole the show here with an unsession about miniKanren. And they were back again this year and put on another great show. I love watching those guys. They do a great job and they've done fantastic work. Ambrose Bonnaire-Sergeant gave us an introduction to core.logic. And he came back again this year and showed us some work that builds on his understanding of core.logic: he added Typed Clojure. And this was all based on the core.logic library, which is David Nolen's port of miniKanren. He's done fantastic work. He gave another talk last night at the unsession that was really worth hearing, about some of the new stuff he's adding to core.logic, especially the constraints. I think that's really important. He's done great work, and I really encourage you guys to follow what he's doing and talk to him about it. Also, Jim Duey — I think my first introduction to miniKanren was when he talked about it in blog posts maybe a couple of years ago, so it's been a while — has also done some recent work adding fork/join support to core.logic. So there's a lot of great things happening around logic programming in the Clojure community, a lot of enthusiasm. But there's also some history that I wanna go over from the old days of logic programming. When I was first introduced to it, the enthusiasm here kind of reminded me of how I felt way back then. So we'll try to bring that all together. So here's an outline for the talk.
I'm gonna go over some of my personal experience, how I came to study logic programming, and try to bind that into a historical narrative about some of the history. There's always some boring filler in the middle that maybe you don't have to worry about. Then, like any good show, we need a dramatic conflict, so we'll bring in some of the critics of logic programming. I think we have to face some of that and see if we can address some of the difficulties people have had with logic programming. Then we have an unsatisfactory ending — that's just the way it goes in the real world. So, "challenge": since this is a Clojure talk, I think we always have to start with a definition. By challenge, I'll take the dictionary definition: an objection or query as to the truth of something, often with an implicit demand for proof. And the big challenge here is: why should you learn logic programming? Okay, that's another burden on programmers. Do you really need it? What's it good for? And I like the idea of calling it a challenge — we're saying logic programming has to prove itself to us. So proof and logic kind of go together. All right. Okay, so let's step back. Twenty-five years ago, I was a young Lisp programmer, kind of at the top of the world. I loved Common Lisp; I was all in on it, working on expert systems and planning systems. And really, at the time I felt that's all I needed — Common Lisp was great. But I had a boss who was kind of an expert in Prolog, and he introduced me to Prolog. It was not something I necessarily wanted to get into right away, but I figured it was worth learning. A lot of us here kind of collect languages, try to see what other people are thinking about, and it can expand your worldview if you learn some other languages. So, all right, I was in: I'd learn Prolog. Now, I had a really good example. Most of us getting into logic programming only have some toy examples.
While I was working at SRI, there was the AALPS project, the Automated Air Load Planning System. I hadn't worked on it, but my boss had been in charge of it. It was an expert system that did automated air load planning: it knew how to load big cargo planes for the Army. And it was a classic expert system, where they had spent a lot of time with the load masters, the humans who had done this planning in the past. They put together a system that really delivered and could load planes. The cargo load masters were happy with it; it was a fielded system, and the Army was using it. So I had this example: a real, serious system that works. A lot of us approach logic programming having never seen anything like that, so I wanna tell you that people have done some great things. This was also done by a small team of Prolog programmers at SRI — there was really only one Prolog expert at the beginning, plus a handful of recent college graduates who put this together. So that was another sign that maybe there was something to this logic programming. I was gonna learn it, and fortunately I had some really good people around me who already knew about logic programming. But still — oops, let me go back, I gave it away; this slide's a little touchy. As I say, there's a cultural challenge when you're learning logic programming. I was coming from Common Lisp, and I was maybe, I don't know, a bit cocky, a bit full of myself. I thought, what do you need beyond Common Lisp? I have s-expressions, I have macros, I have CLOS, the Common Lisp Object System. If you haven't seen that, you haven't really appreciated object programming. I know in this community we criticize object programming, but take a look at what the Lisp world had with the Common Lisp Object System — there's a lot of amazing technology in there. So I was all in. That's how I learned object programming, through a Lisp, and that's a little different than starting with Java. So there's a lot to that.
So anyway, I had full control of my world. That's what being a Common Lisp programmer is: you own everything, and you can do whatever you want. Prolog was a little different for me. It has strange syntax — I mean, we can get past that pretty quickly, but it was different, and it was like, why did they do something different? Haven't they seen Lisp? Some of them had, some of them hadn't. Then there's unification and backtracking. Lisp programmers often look at something somebody's done and say, that's pretty cool, but I could have done that — I could do that in Lisp. So what does it matter? And really, I had done logic programming already in Lisp; there were several popular Lisp systems that could do logic programming. But Prolog, I think, was something special. There was something interesting to it, where they had refined it and simplified it down to what really mattered. And it was worth taking another look at the world of programming through that lens of logic programming. So I call it a cultural challenge — it reminded me of Dances with Wolves. Kevin Costner played a Civil War Union soldier who got connected with, I think, a Sioux tribe. The Indians seemed like a more primitive culture, but he came to appreciate that maybe, even though they were simpler and not as sophisticated as the Americans, they were in touch with some deeper truth, something important about life. Or for a more modern take on that, there's Avatar — a similar kind of idea. So that's how I felt going into the Prolog world. I felt like I was leaving a lot of comfort, a lot of my sophisticated tools behind, and I was learning a more primitive way of looking at things. But then after a while, you accept that there is something special to getting down to the logic.
So my logic programming history was with Prolog, but really we can think of logic programming a little more broadly: any kind of programming system that's inspired by logic. Typically we have facts, rules, and queries that we can deal with. In particular, the rules define relationships between objects, and our computation is done by deduction. That last thing is, I think, the hardest part to get a good feel for: we don't have a lot of control over what's gonna happen. We're trying to just state our problems and ask our queries and let the system figure everything out for us. That takes a while — you have to let go, and you have to do more work up front to express yourself. So now, to try to say a little bit more about logic programming, I have an analogy. A few weeks back, I had the privilege of meeting this fellow, Pearl Fryar. He has three acres of topiary around his home that he's been working on for close to 30 years. You may have seen him — he's been on PBS, and they made a documentary movie about him, A Man Named Pearl. Great character, really interesting guy. He lives in Bishopville, South Carolina, not far from I-20, so if you ever have a chance, stop by. You can look him up on the web; he has a website. He says his home is open if you wanna come and look at his garden anytime, and if he's out in the garden, he'll talk to you. My wife's a gardener and she knew all about him; I didn't know too much about him, but we spent an hour just walking around while he talked to us, and he's very philosophical about creativity. He told us how he does his topiary. The reason I bring this up is that he has just two simple processes. He has a technique of binding: he can pull the branches together and tie them, bind them together, so they grow together a certain way. And he can prune, so he cuts things.
And from just those two simple operations, he creates these wonderful topiaries. There's a lot of creativity, and of course it takes a lot of discipline and patience to make this work — some of his topiaries take a couple of years to grow into the form he wants. But the idea that just two simple operations are all he needs to create all this wonderful stuff — that, to me, was the connection to logic programming. It's very simple at the base, but you have to spend some time, and have a little bit of vision, to see what can come out of your program. And he has this great creative flair where he can take a standard bush and create something new and different. We asked him, can you teach how to do this? And he said, it's not really that easy to teach: I can tell you the techniques, but for you to create something, that has to come from your own inner creativity. I think there's some of that in logic programming, where it takes a while to get to the point where you're comfortable creating your own problem description and understanding what's going on when you're not controlling it. As programmers, we're typically in control of what's happening, always thinking, okay, we can sequence these operations and get the result we want. But in logic programming, we have to let things grow naturally a little bit, and we form our statements to get the results we want. It's a little different way of looking at the world. So when I was learning logic programming, the book I started with was The Art of Prolog. It is a great book — I really highly recommend it if you haven't seen it, or if you can find an old copy. The Art of Prolog, by Sterling and Shapiro, from '86. I found this quote when I was getting ready for this talk: "In Prolog programming, in contrast perhaps to life in general, our goal is to fail as quickly as possible." And that's kind of a fun quote.
And there's something to that. Typically we're exploring some kind of search space to find an answer to our query, and if something's not gonna pay off, we want to make sure we get out as quickly as possible. This is important especially in Prolog, where the ordering of clauses mattered. When you get into core.logic — and this to me is an interesting idea — we don't want to depend on the ordering of clauses. The miniKanren way of doing things tries to be more logical and avoid that kind of procedural artifact. Shapiro also did a lot of work on Concurrent Prolog, and I was fortunate to have had the opportunity to take a class from Shapiro where he talked about Concurrent Prolog. I'm not here to talk too much about that today, but there were a lot of mind-opening interpretations of logic programming in the concurrent space: you can map each Prolog procedure into its own little process and use a kind of pipeline approach, so you have a network of connected processes doing concurrent Prolog programming. Shapiro also consulted on the Fifth Generation project — we're gonna talk more about that in just a little bit. So, following on from some of the ideas in The Art of Prolog: what are the benefits of logic programming? I'm an advocate for logic programming, and after learning it way back when, I'm really happy to have a chance now to come back to it. In a sense, Clojure brought me back to Lisp too. As the introduction said, I've been around a lot of different programming languages — I started in Lisp and moved on through Smalltalk and Objective-C and Java, and now I've come back, and I'm really happy to be in a Lisp again with Clojure.
I think Clojure has done a fantastic job of taking some of the older ideas and presenting them in a new way, mixing things so that you can really bring together all these different aspects of programming, the things we've learned — and now bringing core.logic into that. That's another important tool. So, back to the benefits. The main thing about logic programming is that it forces you to have a precise statement of your problem. You're really doing all your work up front: if you can describe what you're talking about, the solution can just kind of fall out of that. So Shapiro says spending time on creating a precise statement of your problem can be an intellectually rewarding experience. And when your professor tells you something is intellectually rewarding, that usually means it could be hard, okay? So I wanna tell people it's okay if logic programming seems hard at first. I think there is something to that — it's different, maybe, if you haven't had experience with it — but stick with it, and maybe you'll get the intellectual reward Shapiro was talking about. Another aspect: there's a certain elegance to logic and logic programming. That's hard to quantify exactly, but most of us, even if we don't understand everything about logic programming, can look at some of these nice solutions and say, wow, that's really great. It's a way of recognizing the power behind simple expressions in logic programming. Now, that comes partially from the idea that we have declarative semantics, right? We've been thinking about logic for hundreds of years; we have a sense of what it means to express something in logic. In our experience in mathematics and science and computer science, logic is very important, and we accept it as a way to explain ourselves to the rest of the world — it's our common language.
So if you have these declarative semantics, you have an idea of what your program really means. But of course it's important to be able to execute that program. So we can take that same program, where we are just talking about what's true in the world, what we know, and we can run that program: we can ask queries and get answers. So we also have a procedural way of interpreting the program. To me, that's the core idea behind logic programming: we have this duality. There's a natural logical interpretation — we understand the meaning of our program independent of any computer running it — but then we can also just take that statement of our problem and get results. That's very powerful. And Shapiro says that duality is a key concept throughout all kinds of problem solving. We like to take problems and transform them, and say, well, this problem is really like some other problem that we understand, or this statement can be interpreted in multiple ways. We're always looking for useful dualities in computer science, and logic programming is one of those special cases where we found something: we can take our declarative semantics and run those programs. Also, once you get something that works in a logic program, it's very flexible. As Dan Friedman likes to say, you can run your program backwards, okay? That's something you can't really find anywhere outside of logic programming. Then the last item I have here is verification by proof — maybe. For years and years, programmers have wanted to prove that their programs work, and people put a lot of effort into that. Now, it turns out, of course, that's a hard problem. You can't really do that in general; you can't guarantee that your Prolog program will terminate. But there are things you can prove. You can prove some things are true because we have a logical specification, so we at least have a chance to prove them. That's why I say "maybe".
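To make the "run it backwards" idea concrete, here's a toy sketch in Python — a hypothetical, stripped-down miniKanren-style core, not core.logic's actual API. The point is that one relation, `parento`, is defined once as a set of facts and then answers queries in either direction, with the unknown on either side:

```python
class Var:
    """A logic variable, distinguished only by identity."""
    pass

def walk(t, s):
    """Resolve a term through the substitution s until it's not a bound variable."""
    while isinstance(t, Var) and t in s:
        t = s[t]
    return t

def unify(a, b, s):
    """Return an extended substitution if a and b unify, else None."""
    a, b = walk(a, s), walk(b, s)
    if a is b:
        return s
    if isinstance(a, Var):
        return {**s, a: b}
    if isinstance(b, Var):
        return {**s, b: a}
    if isinstance(a, tuple) and isinstance(b, tuple) and len(a) == len(b):
        for x, y in zip(a, b):
            s = unify(x, y, s)
            if s is None:
                return None
        return s
    return s if a == b else None

def eq(a, b):
    """Goal: succeeds (yields a substitution) when a and b unify."""
    def goal(s):
        s2 = unify(a, b, s)
        if s2 is not None:
            yield s2
    return goal

def disj(*goals):
    """Goal: succeeds when any of the goals succeeds."""
    def goal(s):
        for g in goals:
            yield from g(s)
    return goal

def parento(p, c):
    """The parent relation, stated as facts — no direction implied."""
    return disj(eq((p, c), ("tom", "bob")),
                eq((p, c), ("bob", "ann")),
                eq((p, c), ("bob", "pat")))

# Forward query: who are bob's children?
q = Var()
children = [walk(q, s) for s in parento("bob", q)({})]
print(children)   # ['ann', 'pat']

# "Backwards": who is ann's parent? Same relation, different unknown.
p = Var()
parents = [walk(p, s) for s in parento(p, "ann")({})]
print(parents)    # ['bob']
```

The names (`walk`, `unify`, `eq`, `disj`) are made up for this sketch, but the shape — goals as functions from substitutions to streams of substitutions — is the standard miniKanren construction.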
There are some areas that you just can't handle. So anytime we talk about proof, I'm always reminded of the Don Knuth quote. He sent some solution — I forget what it was — to a colleague, with a little note: "Beware of bugs in the above code; I have only proved it correct, not tried it." I think that's something only Don Knuth can say. All right, so now I'm gonna switch gears a little bit and bring in a little more of the history, what was going on around the time I was learning logic programming. This is the cover of a book, The Fifth Generation, subtitled Artificial Intelligence and Japan's Computer Challenge to the World, written by Ed Feigenbaum and Pamela McCorduck in 1983. A little background on Ed Feigenbaum: he's a professor at Stanford. He was founder of the Knowledge Systems Lab. He's a Turing Award winner for his AI work. He founded some companies, IntelliCorp and Teknowledge, and he was really well connected in government and industry. So he's a very important, influential person in AI and computer science in general, and he was one of the authors — people pay attention to what he writes. I think he was writing this book aimed at the layman, probably aimed at American government officials, with the idea that they should know something about what was going on in Japan. Hence The Fifth Generation. When computer scientists hear about anything like "the fifth generation", what's the first thing you think of? You're probably thinking: what were the first four generations, right? Okay, I'm not gonna say right now, because I'm hoping Alan Dipert is gonna have another trivia question later and maybe I can win a book. But you can think about that, and we can talk about it later: what were the first four generations? So, the Fifth Generation project — the Japanese government sponsored this project.
They have a government department called MITI, the Ministry of International Trade and Industry. For Japan, that was kind of their way of setting industrial policy. Japan had been very successful in steel and automobiles and consumer electronics — they'd done amazing work in overtaking America in a lot of respects — and now they were setting their sights on creating a whole new generation of computers. This was gonna be based on parallel hardware. They adopted logic programming, so Prolog-based software; they were all in on logic programming. And this was really a different way of looking at the world. They said, we want natural language interfaces to our computers, right? Japanese didn't work as well on keyboards as, say, English, so they wanted to get beyond that issue of typing into the computer — they were gonna use natural language. Feigenbaum describes their approach as a "knowledge information processing system", or KIPS. So we're stepping beyond computing just with numbers — a lot of programmers were just Fortran programmers at the time — we're gonna deal with knowledge. That's still a little bit of an open term, but the idea was to leapfrog everything the rest of the world had done. So this is a diagram I took a picture of out of his book, and it's a little distorted — it might be hard to read. I'll just say that at the top was the natural language interaction. They also, of course, were gonna do speech, pictures — basically anything you wanted, you can imagine, right? We'll throw that on top. Just below that, the top half of the diagram is all the software they were gonna do. Now, to me the interesting part is the lower half of the diagram, which was all supposed to be hardware-based. They were gonna have a Prolog machine, and some kind of hardware specially made to handle relational database interaction with their Prolog.
So this was very ambitious. At the time, we were used to having specialized hardware. I had a Symbolics Lisp machine, and there were several different vendors of Lisp machines, custom built for Lisp. So it's not so strange that they thought they could build custom hardware for Prolog. The world has changed, of course — we don't think about doing custom hardware for our programming languages now — but that was the model back then. And at the bottom it says VLSI architecture. That was still kind of new back then: okay, we have these big integrated circuits. So they were gonna build fancy hardware, specialized for Prolog. This initiative had mixed reactions around the world. Feigenbaum saw the US falling behind what the Japanese were planning to do. I think that's the real reason he wrote the book: to try to wake up America that something was happening here, that we were facing a challenge from Japan and we really should do something about it here in America. From reading the book, I get the impression that maybe he talked to IBM and was disappointed that IBM just didn't care about what was going on. IBM was on top of the world then, and they didn't see any reason to worry about all this fancy logic programming; they were just worried about shipping their next generation of computers. The Lisp hackers I knew were really skeptical, and I think this is a natural thing. If you're into one technology, you think you can do anything you need to do in that technology, and when somebody else says they're gonna jump past you because they have something new, something better, a lot of times the initial reaction is: well, that'll never work. I have what I have, and I'm happy with Lisp, so I don't think Prolog's gonna take over the world. But the Prolog proponents were really excited about this. This was their chance.
They had been working for years — I guess Prolog started around 1972, and we were now into the '80s — and they felt like this was a chance to really validate their take on how to program, how to do computation. So people were excited. A lot of leaders in the field had some connection there: I know Shapiro consulted with the Japanese, Feigenbaum consulted, and so did a lot of the top Prolog programmers. But in Feigenbaum's book about the fifth generation, he said that even before the Japanese had announced their project, Europe had been cutting back on funding for computer science, and they really weren't in any position to respond to what the Japanese were planning to do. Feigenbaum thought that the West, and America in particular, needed to make some kind of response. But Europe wasn't gonna do it — some of the top Prolog people had actually come to the US and were continuing their research here because they didn't get the funding they needed in Europe. In the US, DARPA was in charge of most of the funding for computer science research, and DARPA was changing its attitude toward AI: the AI winter was coming, and maybe they had already started to pull back on funding. I didn't really understand this at the time, but it did have a real effect on what we were doing in the research community — DARPA wasn't gonna give us as much money as they used to. So I would say, as far as I can tell, there was not any big coordinated response from the US or Europe addressing what was happening in Japan and Japan's plans. And I think Feigenbaum was disappointed about that. As far as I know, the only concrete steps taken were these: Feigenbaum wrote his book, people talked about it a lot, and a few people like me decided we should learn Prolog just to know what was going on. So now we're gonna switch gears a little bit and talk about this paper.
It's kind of famous, or maybe even infamous — it generated a lot of discussion. It's called "Development of Logic Programming", by Carl Hewitt, and I'm talking about a version I found from 2008. The subtitle is "What went wrong, what was done about it and what it might mean for the future". It generated a lot of response, and it was critical of what was going on in logic programming. Okay, so who's Carl Hewitt? I'm gonna have to speed up here a little bit, but he's a famous AI programmer: he did the Planner language back in '69, which was a procedural embedding of knowledge. He's also famous for developing the actor model of computation. More recently he's been working on a system called Direct Logic that I don't really know all the details about, but maybe it'll pay off down the road. He's now a visiting professor at Stanford. So this is a quote from Hewitt talking about Prolog: "Prolog was basically a subset of Planner that restricted programs to clausal form using backward chaining" and consequently had a simpler, more uniform syntax. That's kind of a backhanded compliment for Prolog — it's kind of like what I did, but not quite as good. But the Prolog people had a response to that. This is a quote from "The Birth of Prolog", another article you can find on the web: "The lack of formalization of this language [Planner], our ignorance of Lisp and above all the fact that we were absolutely devoted to logic meant that this work had little influence on our later research." I think in the academic world that constitutes a zinger. So Hewitt and the Prolog people didn't see the world the same way — really bright people with different outlooks on how to handle logic and programming. So Hewitt — I have to speed up a little bit here — talked about his Planner system and said, okay, so Prolog is maybe a more controlled approach, but he had problems with it.
Now, this slide — we could spend an hour talking about "what went wrong", and this is from Hewitt's perspective; he's critical of logic programming. I think it's worth it for all of us advocates of logic programming to talk a little bit about the criticisms. His first item says clausal form hides the underlying structure of information. To me this is a direct assault on logic. He's saying, okay, things that I know are logically equivalent — he's saying that's not really the way humans reason, that's not really the way we think. I had a bad reaction when I first read that. I said no, no, no, I can't accept that. Don't you know how hard I worked to understand logic, and now you're throwing it out the window? So that's a difficult one. I want to keep logic — it's important to me — and I have to say that the rules of logic still work; I just have to work hard to make sure that I stay within logic. He goes on to say that practical domains of knowledge are inconsistent. Now, this is like saying "you can't handle the truth". It's saying the real world doesn't admit to being described by logic, because we're just not good enough at understanding our problems. And I want to say there's some truth to this — maybe we'll call it a pitfall. I'll accept that there's an issue here, but I don't want to give up logic; I think logic is too important. I think we can spend some time engineering and working on our problems to understand them and make them work within logic. So we have to accept that there's an issue, but I think we can get past it, and it's probably important to keep it in mind when we're working on our logic systems: we can't afford to be inconsistent. And the last item was that proof by contradiction is not a sound rule of inference for inconsistent systems. Well, we know that we have to be consistent — we can't do anything in logic programming if we're not consistent. He was saying big systems tend to be inconsistent.
People develop different modules in different ways, and we throw them all together, and maybe it just doesn't work. So I think we have to address that from an engineering point of view. But this is a serious issue, and there could be a lot more discussion about it. My take on what he's getting at — and I've done this, I've built logic systems — is that it's like building a house of cards sometimes. It can be brittle; it can be easy to make a mistake and have the whole thing collapse. So this is an engineering concern. We have to work hard to avoid brittle systems, and some of that might mean avoiding throwing everything into one huge logical database. That, I think, is where you get in trouble, because you can accidentally change things in one area that somehow affect other areas. So we'll see — maybe we can be more modular, and I think the miniKanren approach helps you be more modular. All right. Now, getting back to this — this is Hewitt talking about the Fifth Generation project. I'll make this fast. He's quoting Robert Kowalski about how great Prolog is gonna be, because it's powering the Fifth Generation project and this is a way to prove the unifying role of logic programming. Kowalski says computation can be subsumed by deduction, and there's the Pat Hayes quote, "Computation = controlled deduction". Hewitt disagrees — he's working on what is maybe a weaker form of logic that he thinks is closer to how humans reason — and he says, okay, well, let's take the fifth generation as a test case. Can logic really do everything? Can it be everything for you, your only programming language? And of course, we know that the Fifth Generation project failed. I have some other reasons for that listed here, but Prolog took the blame. And that kind of hurts, right? Because this was a big system where the Japanese were betting on Prolog to power everything, and it didn't work.
So a lot of people just concluded, well, I don't wanna mess with that logic programming — it didn't work for the Japanese, why should I bother? I think we can get past that. There were lots of other reasons the Fifth Generation project didn't work. And maybe, let's be honest, maybe logic programming shouldn't be your only programming language. Maybe there's another way to get the benefits of logic programming without being all in on logic. So even though the Fifth Generation project was a failure, there are still a lot of things that logic helped to inspire, and you can run down through this list — lots of success in specialized areas. But the last item here is Datalog. Datalog was a simplified Prolog, and it has kind of made a comeback. It was simplified to guarantee termination, with some simple limits on what you could express, and it's been really popular among database people. And we know that Datomic has adopted Datalog. Michael Fogus gave a talk at Strange Loop about the rebirth of Datalog — so I hope someone's live-tweeting right now saying Fogus just got a shout-out at the Conj, okay? I wasn't at Strange Loop and I haven't seen his video, but I saw his slides. So it's definitely worth taking a look at Datalog. And Haskell — it's a functional language, but internally it uses logic for type inference. So, I have to move along quickly here. Now, the big question, the challenge, was: why should we learn logic programming? And I want to say that Clojure can help answer that question. We have a great open community here. David Nolen's done some really great work bringing core.logic to Clojure. Functional programming, I want to say, can be a good host — miniKanren has already shown that. You can do logic within functional programming.
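Before moving on, the Datalog idea above is small enough to sketch. This is a toy hand-rolled evaluator in Python (an illustration of the general technique, nothing to do with Datomic's actual implementation): facts plus two ancestor rules, evaluated bottom-up to a fixpoint. Because rules can only derive facts over a finite set of constants, the fixpoint always terminates — that's the guarantee Prolog gives up:

```python
# Facts: parent relationships as (relation, arg1, arg2) tuples.
facts = {("parent", "tom", "bob"),
         ("parent", "bob", "ann"),
         ("parent", "ann", "sue")}

def step(db):
    """Apply both ancestor rules once, returning the enlarged database."""
    new = set(db)
    # Rule 1: ancestor(X, Y) :- parent(X, Y).
    for rel, x, y in db:
        if rel == "parent":
            new.add(("ancestor", x, y))
    # Rule 2: ancestor(X, Z) :- parent(X, Y), ancestor(Y, Z).
    for rel1, x, y in db:
        for rel2, y2, z in db:
            if rel1 == "parent" and rel2 == "ancestor" and y == y2:
                new.add(("ancestor", x, z))
    return new

# Naive bottom-up evaluation: iterate until no new facts are derived.
db = facts
while True:
    nxt = step(db)
    if nxt == db:   # fixpoint reached — guaranteed on a finite domain
        break
    db = nxt

print(sorted(t for t in db if t[0] == "ancestor" and t[1] == "tom"))
# tom turns out to be an ancestor of bob, ann, and sue
```

Real Datalog engines use smarter evaluation (semi-naive, magic sets), but the termination argument is the same: the set of derivable facts is finite, and each iteration only grows it.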
So your functional programming — your control, all your common needs for programming — can be done in a functional way, but you can have a logic part interacting with that. And we all know the value of values. The other special thing about Clojure is that it runs everywhere, plus the browser. So if you're doing research on logic programming and you want the world to adopt some of your ideas, maybe come to Clojure and show off all these great things you've done. Also, I think concurrency can be important. Then there are other ways we can meet the challenges. Just awareness: people are getting exposed to logic programming just hearing about it here at the Conj and in the Clojure community. I think we need to prove its usefulness to engineers. A lot of the people in this room are just interested in technology and exploring, and that's great — we're gonna try out logic programming — but we have to show the rest of the world: here's a big system that works. I had the benefit of seeing some big systems that worked, in Prolog. Toy problems are great for illustrating issues, but we have to get some big solutions too, so people can see them. And Clojure adoption: I think Clojure is jumping on board with logic programming, and that's great. We might have to do a little work to make that a smoother transition for people. Then I'll end here just showing you Pearl Fryar again, because I love this analogy that topiary is kind of like logic programming. So I hope you take a look at that and give logic programming a try. Okay, that's all I had for today. Thank you.