OK, howdy. So today, I want to look at some of the things from logic programming that might be useful in the functional programming world, and maybe we'll find some ways that you can improve your own practice. This is not going to be a sales talk for Prolog, and it's not going to be a talk about how Prolog is better than your language. Instead, I'm going to invite you to question your language. I'm going to be talking about a lot of things that, unless you're a language implementer, it may not be immediately obvious you have any control over. But as anybody who's suffered through some newcomer to FP knows, you can write Java in Clojure and you can write Java in Haskell. And I assure you, you can write logic-based code in your functional language. So I invite you to think of this in terms of "how can I make my own practice better in the language I'm already in," not "should I jump to Prolog." There's a caveat to this: I'm a Prolog programmer. I know some Clojure and a few other bits and pieces of functional languages, but I'm not an expert in functional languages. All right, logic programming. How many people here were in my workshop? I apologize to you people, because there may be a little repetition here. Logic programming involves a world of facts and implications: if A is true, then B is true. Then you submit a query, and it gives you proofs of that query. We're going to have to talk a little bit about logic programming just as setup for this, so I'm going to do a few minutes of that, and then we'll really get into the real part of the talk. OK, seventh-grade algebra. You have variables, right? We all had variables. Sometimes they're called unknowns, like x: x is an unknown, and this is the quadratic formula. They're called unknowns because they represent things that we know have a value, but we don't know what that value is, right? And does anybody know a lot about the quadratic equation?
What is true about x? Remember, how many values might there be for x? Two, or one, or none, depending on a, b, and c. In Prolog, we call these things variables, and usually proving something involves finding values of these variables that make two things be the same. That's usually what you end up doing when you're proving. Often we're trying to prove that two things are the same. Now, 1 can never be the same as 2, right? Those two are not the same. 1 is the same as 1. But how about here? They could be the same, OK? How about this last one? Well, we don't know what their value is, but we now know they're the same. And this is important, because only if these two can be the same should we keep exploring this path. If we've reached the "1 is not the same as 2" situation, we're done, right? We can't make them the same. But if we could try x being 1, then we could try that solution. And that allows us to make some kind of search through the space, looking for solutions. OK, let's go back to this case: I've got two things that I don't know, and they're both going to be the same. That's all we know about them. We call this sharing, OK? So these two are sharing. This whole process of asking "could these two be the same?" is called unification. When it's possible for the two things to be the same, we say it is possible to unify them. And now we can do what we really came here to do, which is to look at functional programming through the lens of logic programming. And I want to pick one particular area, because otherwise this would be like an eight-hour talk. So I picked binding arguments. It's something that's very fundamental; if you're going to have encapsulation, you're going to have it. And that's binding arguments during function calls, or whatever your variation of functions is. OK, let's look at how binding happens in most languages.
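To make the unification and sharing ideas concrete, here's a minimal Python sketch. The names `Var`, `walk`, and `unify` are hypothetical, not from the talk or from any Prolog implementation; this is a loose analogy, not how a real engine works:

```python
class Var:
    """A logic variable: an unknown that may later be bound."""

def walk(term, subst):
    """Follow variable bindings until we reach a non-variable or an unbound Var."""
    while isinstance(term, Var) and term in subst:
        term = subst[term]
    return term

def unify(a, b, subst):
    """Return an extended substitution if a and b can be made the same, else None."""
    a, b = walk(a, subst), walk(b, subst)
    if isinstance(a, Var):
        return {**subst, a: b}   # bind a to b; if b is also a Var, they now share
    if isinstance(b, Var):
        return {**subst, b: a}
    if isinstance(a, tuple) and isinstance(b, tuple) and len(a) == len(b):
        for x, y in zip(a, b):   # structures unify piece by piece
            subst = unify(x, y, subst)
            if subst is None:
                return None
        return subst
    return subst if a == b else None

x, y = Var(), Var()
unify(1, 2, {})        # None: 1 can never be the same as 2
s = unify(x, y, {})    # x and y share; neither has a value yet
s = unify(y, 7, s)     # forcing y to 7 forces x to 7 as well
```

Note how binding `y` binds `x` too: that is exactly the sharing the talk describes.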
This is something called applicative binding. Binding in calls always has actual arguments and formal arguments. Well, OK, the Forth guy gets to cover his ears. But for the rest of us, we have actual arguments and formal arguments, and what goes on between the actual arguments and the formal arguments is binding. So here's a C-language function call and the thing it's calling. Now what happens? Well, first we evaluate each of these: we get 2, 3, and 5. We then assign 2, 3, and 5 to a, b, and c, and then we make our call. That's applicative binding. OK, what happens here? Anybody want to guess what this prints? I hear "one, two, three." Do I hear anything else? Two, three, four? Undefined? You get a type error? Well, I ran this through GCC on my machine, and on my machine, this is what it printed. So this is applicative binding, and we're all familiar with it: evaluate the args and pass the results. Here's a Clojure macro. Macros substitute their unevaluated arguments in where the formal arguments appear in the body. This is so-called normal binding. Why is this useful? Why have this? Anybody want to explain why this is useful? Isn't it convenient to have it evaluate the arguments? Nobody? Way in the back, yes: you can do your own control structures. In fact, you can do more than control structures; you can basically define your own language in there. You can do something with the arguments besides just use their value. Here, for example, I'm just printing them out. So that's pretty neat: it makes your thing that looks like a function call into a little DSL. So that's applicative and normal binding. Are those the only options? Are there any other forms of binding? Well, if we have unification, they aren't. Let's say part of how our language works is that the terms in the actual arguments and the terms in the formal arguments bind, and this is a made-up language, right?
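The applicative/normal contrast can be faked in Python with thunks. This is a sketch, not Clojure's actual macro machinery; `applicative_if` and `normal_if` are made-up names:

```python
# Applicative binding: both branch arguments are evaluated BEFORE the call,
# so calling this with a division by zero in either slot blows up immediately.
def applicative_if(cond, then_val, else_val):
    return then_val if cond else else_val

# Normal-style binding, faked with thunks: the expressions arrive unevaluated,
# so the callee decides if and when each one runs. That's what lets a macro
# act as a control structure.
def normal_if(cond, then_thunk, else_thunk):
    return then_thunk() if cond else else_thunk()

safe = normal_if(True, lambda: "yes", lambda: 1 // 0)  # the 1 // 0 never runs
```

The difference is exactly why macros can implement control structures and plain functions can't: a function never gets a say in whether its arguments are evaluated.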
That they're going to bind by unifying. Well, unifying is non-directional. So if I unify x and b, they're going to share; they're going to be the same thing. It's not like it's important which one's on the left side. So if we have unification, we can put a term in the actual argument and a variable in the formal argument and pass information in. Or we can bind the formal argument, and since it shares with the actual argument, they're going to have to be the same. So x has got to be b; b, we're forcing to be 4 inside; and hey, that means we're forcing x to be 4 up in the caller. We just returned information. Why would we want to do that? Don't you just want to return one thing from your function? Like, what's wrong with these people? They want to return two things from their function, or multiple things from their function. A sane language like Python would never want to do this. Of course, Python does. OK, here's some Prolog, and here's Prolog passing in arguments. So far, unsurprising. And here is Prolog passing in a and b and passing c back out. So I'm going to set c to 3 in the body, and x will end up 3 in the caller. We don't really need c. Let's just put the value right in the formal arguments. Am I in the right place? Put a 1 in the formal argument. 1. I'm sorry, I revised the slide and didn't fix my note. So I'm going to put a 1 in the formal argument, and x should now be 1. My apologies. We now have this crazy thing, and it's getting a little disturbing, because I've now got a formal argument that isn't even a variable. Yeah? And that's a little disturbing. How many of you work in a language that has destructuring? OK, well, this may not be quite as disturbing if you're from such a language, which is, frankly, logic languages influencing functional languages. I'm fairly certain that Rich Hickey got that from logic languages when he started doing destructuring. OK, this is a Prolog record.
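Here's a self-contained Python sketch of an "out" argument: a caller passes an unbound variable, and the callee's unification binds it, which returns information to the caller. `Var`, `unify`, and `mk_point` are hypothetical names for illustration, not a real API:

```python
class Var:
    """A logic variable; compared and stored by identity."""

def unify(a, b, s):
    """Tiny unifier over a substitution dict s; returns None on failure."""
    while isinstance(a, Var) and a in s:
        a = s[a]
    while isinstance(b, Var) and b in s:
        b = s[b]
    if isinstance(a, Var):
        return {**s, a: b}
    if isinstance(b, Var):
        return {**s, b: a}
    return s if a == b else None

def mk_point(a, b, c, s):
    """Like a Prolog clause mk_point(A, B, point(A, B)): the 'body' just
    unifies the third argument with the term point(A, B)."""
    return unify(c, ('point', a, b), s)

x = Var()
s = mk_point(1, 2, x, {})  # x goes in unbound and comes back bound
```

Because unification is non-directional, the same `mk_point` also works as a check when the third argument is already a term: there's no separate "in" and "out" version.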
And there's not much to them; they're just a little data structure. So this is a way of hauling things around together as a unit. This is a 2D point. And here, I'm going to pass in a point structure. Fine, I'll just pass in the point structure. Yeah? What I'm going to put in the formal argument is this structure, so that I can do basically some destructuring. OK? Everybody happy? How many people are lost? How many people are not lost? How many people didn't raise their hand? So, OK, that's fairly straightforward. Here, I'm going to pass the x-coordinate out instead of in. OK? So now, wait a second, this is a little bit crazy, because now this third argument is partially out and partially in. OK, well, we're still just going out and in; I've kind of accepted that. All right, well, how about this one? What do you think happens when you do this? People who were in my workshop, shut up. Everybody else? You think it'll throw an exception? Well, these can't be the same, so it's not going to unify. Right, you were in the workshop; you're not allowed to do that. OK, so at this point, yeah, it's not going to be able to do it. In fact, what Prolog does is something called fail, which is a kind of "I'm in trouble and you must tell me something else to do." A little bit like an exception, except a little more gentle. Now, here, I've fixed the problem. In Prolog, I'm allowed to provide more than one definition, and if one of them fails, it'll go on to the next one. If one can't unify, it only picks the one that will unify. OK, anybody notice something weird we're doing here? Yeah, we're doing class dispatch on the value of the third argument. Excuse me, we're doing value dispatch on the third argument. Yes, so what happens if I do this? Oh, heck, it matches both of them. Well, what it does is it's going to give me one of them. And remember, I said it'll fail, and then you have to do something later.
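The "try the next definition when one fails" behavior can be sketched in Python. `Fail`, `clauses`, and the `sound_*` functions are all made-up illustration names; real Prolog does this selection by unifying the head, not by raising exceptions:

```python
class Fail(Exception):
    """Signals 'this clause doesn't apply'; gentler than a real error."""

def clauses(*alternatives):
    """Try each clause in order, moving on when one fails, the way
    Prolog falls through to the next definition of a predicate."""
    def dispatch(*args):
        for clause in alternatives:
            try:
                return clause(*args)
            except Fail:
                continue        # this clause didn't unify; try the next
        raise Fail              # no definition matched at all
    return dispatch

def sound_cat(animal):
    if animal != "cat":
        raise Fail
    return "meow"

def sound_dog(animal):
    if animal != "dog":
        raise Fail
    return "woof"

# Value dispatch: which body runs depends on the argument's value.
sound = clauses(sound_cat, sound_dog)
```

`sound("dog")` skips the first clause and lands on the second, and `sound("fish")` fails outright: that's the dispatch-by-unification the slide shows.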
Well, what you do later is you find the last one of these you've created, the so-called choice point, and you run it again with the next option. You then redo the computation. So we've unwound the computation: we've unwound the stack and undone all of the unification that's happened since this choice point, because that's no longer consistent. (OK, thanks. Is that 30 minutes in, or 30 minutes left? Because my machine's telling me much less than that. That's going to be exciting.) So it's going to back up, and it's going to do that again: go back up, try the next one, and redo the computation from that point forward with the new value. OK, and what happens here? We still don't have a match, so we're still going to fail. So we will back up to whatever the heck is above us, the previous choice point, and re-run our computation with some other value, hoping to find some solution. We just implemented depth-first search for a proof. And we now have a way of effectively making loops, because we can do selection and recursion. We have a Turing-complete language, woo-hoo! If you like the first result, you can take it. If it causes problems further on, you undo the computation, you back up, you try the second alternative. If you run out of choices (no more choices is our definition of "causes problems later on"), we back up further. And we'll back up either all the way to the top, in which case we just say false. How many people have heard the Prolog programmer joke? How many Prolog programmers does it take to change a light bulb? False. Wait, we were talking about argument binding, and now we're talking about control flow. So now argument binding is a control structure? It's a control structure. Wait, why would I want this? What good is it? So, OK, here's a control structure in a method in Java. And it's OK; half the world runs on this stuff. But can you introspect this?
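The choice-point-and-retry behavior is, structurally, depth-first search, and Python generators give a loose analogy (this is a sketch of the control flow, not of how a Prolog engine is implemented):

```python
def choose(options):
    """A choice point: on backtracking, the next option is tried."""
    yield from options

def solutions():
    # Depth-first search: each nested loop is a choice point. Falling
    # through to the next iteration is Prolog's "fail, back up, retry".
    for x in choose([1, 2, 3]):
        for y in choose([1, 2, 3]):
            if x + y == 4:       # the test that must succeed, like a unification
                yield (x, y)

list(solutions())  # [(1, 3), (2, 2), (3, 1)]
```

When `x + y == 4` fails, control "backs up" to the innermost loop for another `y`, and when that runs out, to the outer loop for another `x`; running out entirely is the final `false`.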
At runtime, what can you find out about this system? You can call it and get a value. You can actually call the introspection, the reflection stuff, and get the name and the args and the types out, and get the return type. But can you find out, for example, that it does a write-line? Pretty much, that thing's an opaque blob. OK, how about Clojure? Is it better? You can certainly get the function body. But what can you do with it? Anybody here actually introspected a function body in production code? We have the APL guy, and Scheme. It's more doable in Scheme, but I bet you it hurt. Because if you're going to do that, you need to understand completely how the control flow works, which means you have to understand if and cond, and the macro that the new kid put in, and the macro from the code that nobody's looked at since '93. And yeah, it's hard, right? So it's not much better. You may be able to actually get the thing, but you can't get it in a way you can understand. But this backtracking scheme is Prolog's only control structure, which means function bodies are pretty much completely transparent. And in fact, it's pretty normal; I've introspected function bodies hundreds of times. I'm now going to do some live coding during the keynote. I am brave. No, not really going to do live coding; I'm going to have to figure out how to get this thing out of slideshow mode, OK? And then, woo-hoo, OK. So, OK, nth0. It's 0 because you can do it 0-based or 1-based. Sometimes it'd be nice if we had arrays, but we don't have arrays in Prolog; we have lists. So if we want an array, we can treat a list as an array with nth0. And nth0 takes the index I want to get, and a list, and then this is the element. And then it does something weird. This looks like something a beginning programmer would write, right? Because it also will give you the list without that element.
Now, and forgive me, I've got to turn around, because otherwise I can't see what I'm doing. OK, this is unsurprising: it gets element c, and it gets the list [a, b, d]. Fine, but I've got this crazy in-and-out thing going. What this means is I can also do some of these other modes. That is, I can make N unbound and bind the element. And what do you think this is going to do? What it's actually going to do is give me the location of all of the b's. See, it ran out; it says false. Or I could make both the element and N be unbound. And now what do I get? I get 0 and a, 1 and b, 2 and c. It's just going to enumerate them. I'm going to get rid of that. Hang on, I'm going to close these so that I get back on track. Where was I? OK, now, what's this one going to do? Holy crap, I didn't bind the list! Well, it turns out this actually does something fairly sane in our world: it gives me a list with an a in the second element, and the other elements are unbound variables. It had to generate variables, so it just gives them numbers, right? And I can figure out what I removed from one list to make another; it turns out I removed the c. Or I can figure out what lists I can make by inserting an a into the list [x, y, z]: it'll put it in all the possible positions, thank you. OK, and the truth is, I only picked this many of them to show. There are about 18 modes that actually make some sort of sense out of this one predicate. What does that get us? Anybody? Yeah, you're right: this kind of pattern matching and multi-modality really, really reduces the size of your API. You've seen how we've got all these different functions that have to do with arrays, many of which might not be in a standard library just to avoid bloating the library. But here they are, and they don't take up any space. When you start a newly installed SWI-Prolog 7.7.18 on my machine, you get 2,231 predicates available.
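A few of nth0's modes can be sketched as a Python generator. This hypothetical `nth0` only handles the modes where the list is bound; the real SWI-Prolog predicate covers more, including the unbound-list mode shown in the demo:

```python
class Var:
    """Marks an argument as unbound, so the relation should enumerate it."""

def nth0(i, lst, elem):
    """Relational sketch of nth0: i and/or elem may be unbound Vars.
    Yields every (index, element) pair consistent with the arguments."""
    for idx, e in enumerate(lst):
        if not isinstance(i, Var) and i != idx:
            continue   # index is bound and doesn't match this position
        if not isinstance(elem, Var) and elem != e:
            continue   # element is bound and doesn't match this value
        yield (idx, e)

lst = ['a', 'b', 'c', 'b']
list(nth0(2, lst, Var()))      # look up by index: [(2, 'c')]
list(nth0(Var(), lst, 'b'))    # find every b: [(1, 'b'), (3, 'b')]
list(nth0(Var(), lst, Var()))  # enumerate the whole list
```

One definition, several call patterns: that's the multi-modality that shrinks the API, because lookup, search, and enumeration don't need three separate functions.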
That number is incredibly small compared to the number you get with, say, a fresh install of Java. Yet Prolog has a philosophy of being, as we say, batteries included. You're getting the UDP libraries. You're getting the TCP libraries. You can do Google protocol buffers. You're getting the pack installer. You're getting the library that reads compressed archives. You're getting a web framework, all that jazz. We figure disk space is basically cheap these days, and when you do a Prolog install, you get most everything. So doing this really reduces your API count. When you reduce your API count, you reduce your testing burden. You reduce your surface area for bugs. You reduce the learning curve. I bet I have more than 95% of the SWI-Prolog library memorized. I'll admit I'm pretty dedicated to Prolog, but getting to where you know 2,000 things is going to take you a lot less time than the 30,000 or so that are in Java, right? OK, so this much smaller API, you can see, is going to reduce a lot of things. And so I hope I've given you something to think about, and you can start questioning exactly what your language does, and why it does it, and how it does it. I hope you've enjoyed this, and I'll thank you. I'm running a little bit short, but that's always good. Questions? Yes, question: what happens if you run the example with the unbound list one more time? Does it give you false? With the unbound list. Oh, if I run it again? Well, if I run it again, I'm going to restart the whole process. So it'll just generate some new unbound variables for you? Yes, it'll make a new list. So it'll be never-ending. Aaron. So one of the classic complaints about these sorts of control structures is that you can't reason about the cost model. But you've had a lot of experience working with Prolog, and Prolog has an explicit search pattern, a search algorithm, for accomplishing this stuff.
And I think in the workshop, you also talked about tail call optimization. So can you speak a little bit about your experiences with understanding the predictability of the performance of your code? Yeah, I'll admit it's more likely to bite you. And I've programmed some CELF, C-E-L-F, which is a linear logic language that, instead of taking the options from top to bottom, takes them randomly; it actually calls a random number generator and picks one. And I will tell you that figuring out the cost model of that is much harder. But for experienced Prolog programmers, yeah, you generally know what order things are going to be done in. The truth is that functional programmers, in theory, are just applying a function and don't have to worry about execution order; but how many of you actually work that way in production, with most of your code? OK, well, I knew I was going to get in trouble. I saw the APL guy and I was like, oh, no. But for most of you, most of the time, even if you're in a functional language, you still have a pretty good sense of what the imperative order is. That's not the advantage of a functional language. The advantage is that not needing to know the order produces nice mathematical properties. Yes, sir? So this is sort of mind-blowing when you look at it from an imperative perspective. I've been trying to find where this is being used commercially, to disproportionate results. Prolog is quite powerful, which is obvious, but it doesn't seem that there are many commercial applications that are known in the mainstream. It's true. There are a few, and quite frankly, one thing that happens when you have a language with a lot of power, especially a lot of algorithmic power, is that it tends to gravitate toward fields that are arms races for algorithmic power. And for those people, the last thing they want to do is give their competitors a heads-up.
So you find Prolog in fields like high-speed trading, and in applications run by agencies of my government and the Russian Federation's government that they don't like to talk about. We routinely get contacts from people who are sort of like, "I'm using Prolog. If you tell anybody, I'll kill you." So that's one aspect. Another is just that it's, in a sense, not cool. For what it's worth: gate assignments. When you get on a flight, there's a good reason why they won't tell you what gate you're coming in at until partway through the flight or so. That's because they haven't assigned it yet. That's why the gates are all uniform: this is a British Airways gate, and an hour later it's a Delta gate. That is done dynamically by a dynamic planning algorithm. Organizations like DHL and FedEx run large planners. FedEx figures out every night what flights to fly to get all of its boxes where they have to go. They don't have a preset route; that's part of the magic that makes their system work. And that gets pretty hairy. All of those things are often done in logic languages. Otherwise, the other thing that happens is, frankly, that nobody's on our VM. So we end up with things like core.logic, where people say, yeah, we're a Clojure shop, but underneath they've got core.logic. Gaming engines? Not that I know of. People try to build gambling engines or engines for video games this way? No. No, not that I've seen, and I've worked in the video game industry. Can you stop the search if you've found what you're looking for? Can you tell Prolog that you don't need to search for more stuff? Yes, you can. Good question. That's something called the cut, and it's a very important part of Prolog: being able to say, OK, I'm done. You could also, when you get back, just not call again. But yeah, saying, OK, I'm going to get rid of all those choice points I don't need anymore, is an important part of Prolog. It has funny mathematical consequences.
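In the generator analogy from earlier, committing to a solution and discarding the remaining choice points can be sketched like this (a loose analogy for the cut; `answers` is a made-up name, and a real cut is more fine-grained than closing a whole generator):

```python
def answers():
    """A predicate with several solutions: each yield is a choice
    point that backtracking could come back to."""
    yield from [10, 20, 30]

gen = answers()
first = next(gen)  # take the first solution...
gen.close()        # ...and throw away the remaining choice points, like a cut
```

After `close()`, there is nothing left to backtrack into, which is exactly the point of cutting: you've said "I'm done" and freed the engine from ever revisiting those alternatives.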
In Prolog, as long as you have no side effects in your code, you should be able to reorder the clauses. But if you put in a cut, that acts like a side effect, which means you can no longer reason about that. But that's kind of a theoretical thing anyway, because, for example, if you're doing left recursion, it's really important that you have the base case first; otherwise, your runtime goes to infinity. Any more questions? Over here? No? OK. All right, thank you, Annie.