So, my name is Gavin McKimsey, and I'm going to be telling you a little bit about logic programming today. I just reactivated my Twitter account, so if you want to tweet questions at me, go ahead, and I'll try to tweet back eventually. Send me an email if you want. How many of you have heard of logic programming? Okay. And raise your hand if you've used a logic programming language in any depth. Okay. This is mostly going to be an introduction to what logic programming offers, plus some discussion of Ruby options for logic programming. If you haven't heard about it, that's fine, because I hadn't really either until I started getting ready for this talk. It turns out a great way to learn something is to have some pressure on you.

A little bit about me before we dive into logic programming itself, to explain why I'm interested in this. For the last few years I've been traveling around the country teaching people how to get ready for the Law School Admission Test. I used to think I wanted to be a lawyer, and then I found out that there are no happy lawyers, and I decided to move in a different direction. But for the last couple of years, it's been good to wander around and do this sort of thing. The problem here is a pretty typical thing to see on the LSAT, where you have to figure out a little puzzle like this. We call it a logic game, and the idea is that they give you some variable sets, they tell you about some relationships between them, and you have to fill in the gaps: you have to figure out what must be true, or could be true, or can't be true, using the information they've given you.

Before this, I spent a year playing bridge full-time, and bridge is a great card game. Any bridge players? Game after the session? I brought cards. Let's do it. Bridge is a great game, and if you're a programmer, you're probably someone it would appeal to. There's a problem-solving aspect: you're working with incomplete information, trying to figure out what's going on at the table. Part of that is a process where you're exchanging information with your partner about what cards you hold, while your opponents are also trying to figure out where exactly everything is and build up a picture of what's happening at the table. I was going around the country playing at different tournaments and ended up being the top-ranked rookie in North America that year. Not great by any means, but a good beginner, and it's the kind of thing that grabs hold of you if you like problem-solving, because every five minutes you've got a new little puzzle to solve. I definitely recommend looking into it if that's up your alley.

The same kinds of things that get me interested in LSAT problems and bridge problems are the things that interest me about programming: problem-solving especially, and figuring out how to move forward with the information you've got. My first programming language was Logo, and my first object-oriented programming was in HyperCard for the Mac, back in I-don't-know-when. Great little things, but I've moved on to bigger and better things since then, mostly Ruby. Ruby is great because it allows us to express problems in clean and clear ways.
One of the things that inspires me, and one of the things that's held my interest in programming in general, is that I think of programming as a project where we sit down and say, hey, computer, here's what's going on, and we get what we need out of it. We get the answer we want; we get the answers to all of our problems, hopefully. When I think about programming, I think about being able to solve all sorts of different problems and being able to create machines that can do fascinating things: solve puzzles, model the world, maybe even create music. And ideally, on some level, when I think about programming, I think about artificial intelligence. To some extent, all of us, when we sit down to do this work, imagine that we're creating something that at the very least encodes the intelligence of ourselves and our team, makes it more permanent, and gives it to others. Ideally, even further, we're giving problem-solving a whole new tool set to do whatever we need it to. And what we're getting into today with logic programming is putting artificial intelligence, on some level, in your hands as programmers. Logic programming languages came out of the artificial intelligence efforts of the '60s and '70s, and the approaches they take are really well suited to a variety of problems, allowing us to turn the work of problem-solving over to the computer, to the language. That's what we're going to be looking at today.

So let's talk about logic programming. Logic programming, like I said, involves turning over a lot of the work of problem-solving to the language or to the software. We typically see logic programming languages either as a standalone language, where Prolog is the one you're most likely to have heard of, or as an embedded language within a host, with a DSL to interface with it, where miniKanren is the most prominent example you might have heard of. Logic programming languages have, I think, three big features. They tend to be declarative. They tend to talk about relations between things. And they tend to, or ideally, the whole point really, is that they allow us to make inferences. They allow us to fill in the gaps and solve puzzles by turning that work over to the computer.

Declarative means hiding process, more or less. I want to take a look at a coding exercise you might run into on Exercism or one of those other sites. The Atbash cipher is a pretty simple text cipher. We take a string of text, keep any letters and numbers, and drop punctuation. Letters swap alphabet positions with each other; it's like zipping up both halves of the alphabet, so a and z swap, b and y swap, and so on and so forth. Then we take the whole thing, lowercase it, and chunk it into words of five characters each. So "RubyConf 2016" becomes, well, if you can read that, go for it; I'm not going to try. And that's encoded now. One of the things that's so great about Ruby is that it takes a lot of the work of doing this off our hands. A lot of people talk about being declarative in the Ruby context in terms of method chaining, about not dealing with the low-level data processing. We can turn this cipher, these three steps, into code in a very direct way.
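Here's a sketch of that kind of chain; this isn't the slide code itself, just one plausible way to write it:

```ruby
# Build the swap key by zipping the alphabet against its own reverse:
# 'a' => 'z', 'b' => 'y', and so on.
KEY = ('a'..'z').zip(('a'..'z').to_a.reverse).to_h

def encode(text)
  text.downcase.chars
      .select { |c| c =~ /[a-z0-9]/ }  # keep letters and numbers only
      .map    { |c| KEY.fetch(c, c) }  # swap letters; digits pass through
      .each_slice(5)                   # chunk into five-character words
      .map(&:join)
      .join(' ')
end

encode('RubyConf 2016')  # => "ifybx lmu20 16"
```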
If we've got our key with the letters to swap, then to encode a string, all we really need to do is grab the characters, keep the ones that are letters and numbers, send them through our key, and lowercase everything. Then we grab five characters at a time, make each group into a word, and string those together into a sentence. Ruby gives us this really clean interface for all the looping we'd otherwise have to do, so we can turn the work over to the Enumerable methods and not worry about the actual implementation. On some level, this is being declarative. This is saying: here's what I want; I don't really care how you do it. I just want to get this result; I want these things to happen.

I think declarative exists on a spectrum. When we talk about declarative, we're generally talking about hiding process somehow, and the more declarative we get, the more process we've hidden. The best example of this in Ruby, coming from Rails, is has_many. It's perfectly declarative; it's the most declarative thing you can possibly have. There's no process whatsoever here, no information on what's going on behind the scenes. It's just a claim, just a statement: this thing has many of those, period. And Rails takes care of all the database-level stuff we need it to. We just say, here's what I want; go do it. Logic programs have this same feature of being declarative: they let us take our hands off the problem-solving and just say, here's what we want, go do it.

If you're following along so far, you're probably not going to be surprised where we're going next: relational thinking is the other part of logic programming languages. This is a little bit of a weird step to take, especially if you don't have a math background. Relations come up in math all the time. When we're talking about relations, we're usually talking about categories or types. We define the broad ways they fit together, and then the specific members within those categories fit together according to the rules of the relations we define.

So here's a simple problem. What is it? Thank you. 42, easy, no problem. This is something we can pass to any computer language whatsoever; there's an instruction for it built into the chip. It's totally straightforward, and I promise you I'm going somewhere with this little example. We can pass this in as it is. Now solve this one. You and I know what to do here, because we went to second grade and we made it: we can take 18, subtract it from 42, and figure out what we need. No problem. But here's my question: why can't we just give this to the computer? We can do it with plus; that's pretty simple, pretty straightforward. Why can't we do this? It's only one step further. But if we just say, hey, Ruby, what's 18 plus question mark? I don't know exactly what error it'll give you, but it's not going to fly.

This is getting into the world of relations. With our normal kind of plus, we're thinking about functions: we've got inputs, we crank those through our process, and we get an output. That's a function. A relation works with whatever we've got. If we happen to have the two inputs and we need to know the output, great, we can do that.
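To illustrate the difference, here's a toy relational version of addition in plain Ruby. It just brute-forces a small domain, but it shows the shape of the idea: any argument can be the unknown (nil here is my stand-in for the question mark):

```ruby
# sum(a, b, c) holds whenever a + b == c; pass nil for whatever you don't know.
def sum(a, b, c, domain: 0..100)
  values = domain.to_a
  (a ? [a] : values).product(b ? [b] : values, c ? [c] : values)
                    .select { |x, y, z| x + y == z }
end

sum(18, 24, nil)  # => [[18, 24, 42]]  forwards, like a function
sum(18, nil, 42)  # => [[18, 24, 42]]  backwards, from output to an input
```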
And if the computer knows how summing works, then relations allow it to work from an output back to the inputs, or to work with whatever pieces of information we've got. This is the other big feature of logic programming languages: they give us this relational approach where we tell the computer how things work. We tell it, here's how plus looks, and it knows how to go forwards, how to go backwards, and how to give us whatever we need with whatever we've got. That's one of the big promises of logic programming, and we're going to take a look at it today.

If this all sounds familiar, this declarative, relational stuff, it might be because you've used SQL. SQL is probably the best example of a declarative, relational tool that we're all familiar with. SQL lets us declare what our relations look like in the form of tables: we say, here are these different data types and here's how they match up. We declare facts when we add rows to those tables. And our queries are declarative; we do have a little bit of process going on with our joins, but mostly we've handed things off to the server and let SQL decide how it's going to query the information and get us what we need. Where SQL falls a little short is in filling in the gap: that question mark, that inference. That's what we want out of logic programs: being able to pass in whatever we've got, whatever inputs we have, or the output we're looking for, and letting the system figure out how to get us there.

So let's talk about problem-solving, about strategies for how to go about doing this. This is a problem that was proposed in the mid-1800s, called the N Queens problem, and it typically comes up in the context of search optimization. The idea is that you've got an N-by-N chess board and you're trying to place a queen on each row and each column such that no two can ever attack each other. The trick is figuring out how many different arrangements there are for different N-by-N boards. Most of the approaches to solving this look pretty similar; they all have several features in common. You're basically doing a search: you pick a square, put a queen on it, and see where that leads you. You block off the affected squares and then repeat the process, trying out new squares for queens until you either run into a problem and go backwards, or find a solution. There are a couple of different ways we could do this: breadth-first search, depth-first search, some backtracking. Dijkstra wrote a paper on this that he included in one of his books in the '70s. But every solution to this, for the most part, has these features, and that sounds like a ripe opportunity for abstraction.

I want to put a label on what's going on here: this is called constraint processing, or constraint propagation. What we've got is a set of rules that we're going to apply repeatedly. Once we reach a fixed point where nothing else has changed, we might have to take a different approach, but at first we just keep eliminating squares and narrowing down our space. In the N Queens problem, the constraints we're dealing with are: eliminate squares that are already attackable, and if there's only one place left in a row or column to put a queen, we'd better put a queen there. We can keep applying those over and over again until we've got a solution.
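Here's a rough sketch of those two rules in Ruby; the board representation (a Hash from [row, column] to :queen, :open, or :blocked) is my assumption, not the talk's:

```ruby
def attacks?(queen, square)
  dr = (queen[0] - square[0]).abs
  dc = (queen[1] - square[1]).abs
  dr == 0 || dc == 0 || dr == dc  # same row, same column, or same diagonal
end

def propagate(board)
  loop do
    changed = false
    queens = board.select { |_, state| state == :queen }.keys

    # Rule 1: block every open square that some queen can attack.
    board.each do |square, state|
      next unless state == :open && queens.any? { |q| attacks?(q, square) }
      board[square] = :blocked
      changed = true
    end

    # Rule 2: if a row has exactly one open square and no queen yet, a queen
    # is forced there (columns would get the same treatment).
    board.keys.group_by(&:first).each_value do |row|
      open = row.select { |square| board[square] == :open }
      next unless open.size == 1 && row.none? { |sq| board[sq] == :queen }
      board[open.first] = :queen
      changed = true
    end

    break unless changed  # fixed point reached: time to branch
  end
  board
end
```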
The other approach that comes into this is branching. When we've reached a point where we can't do anything else just from the rules, we branch off and do a more brute-force step: we pick an empty square, put a queen on it, and keep going, to see whether we reach a solution or not. Visually, this is what that looks like. We've got our queen, and we eliminate all the squares that queen can attack. Nothing is narrowed down enough to place another one, so we go on to branching: we pick a column or a row, put a queen on it, and see what happens next. Then we go back to propagating the constraints and eliminate all of those attackable squares. And now we can apply that second rule: now we do have places so narrowed down that we're forced to put queens in them. So we do, and our problem is solved. This is a smarter approach than just brute-forcing. At this size you could totally brute-force it, but it ramps up quickly as the grid gets bigger and bigger, and this constraint propagation approach is an intuitive, natural one to take, and also a smarter one than just throwing memory at the problem.

Another problem that's really well suited to a constraint propagation approach is Sudoku. You know Sudoku: a nine-by-nine grid with rows, columns, and boxes that you're filling the digits one through nine into, with no repeats allowed. Pretty simple rules: you've got to figure out how all the numbers fit in, and one through nine has to show up in each of those units. Brute force is possible if you've got a small grid and a lot of information, but the less information we have and the more we have to fill in, the worse it gets. This is fudging things a little, but with no information at all, a single row alone has nine factorial possibilities, and it keeps going for each column and row after that. Last year in her keynote, Aja Hammerly pointed out that this is big O of n-bang-bang, which is horrific, and we shouldn't try it. We need to do something better.

So constraint propagation comes in, and again, we can pretty much convert the rules of Sudoku into a set of constraints that we want to apply repeatedly: if we've got a slot with only one possibility left, or a unit with only one place a digit can go, we need to fill it in. So here, in that upper-right box, the two nines in the top three rows eliminate those possibilities, and we need to set the slot to nine. Our next step after that would be to remove nine from the other affected units: from column G, from row 1, and from the box made up of columns G, H, and I and rows 1 through 3. In our data structure, we'd need to eliminate those possibilities. These are pretty straightforward constraints, and we can just apply them over and over again.

This is code that does that. This is Peter Norvig's constraint-propagation-based approach to solving Sudoku. It's written in Python, it's elegant, and it fits on two pages of normal-sized text. I don't expect you to read it right now, but it's a good, elegant solution to this problem, and it takes advantage of Python's list comprehensions. But we can do better. Logic programming offers us a route forward as programmers, where we turn the problem-solving over to the computer and don't worry about a lot of this stuff.
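For reference, the core eliminate-and-assign move described above looks roughly like this when you write it by hand; it's exactly the kind of bookkeeping a logic engine will take off our hands (the Hash-of-Sets representation is my assumption):

```ruby
require 'set'

# candidates maps each square name (e.g. "G1") to a Set of possible digits;
# peers maps each square to the other squares in its row, column, and box.
def assign(candidates, peers, square, digit)
  candidates[square] = Set[digit]
  peers[square].each do |peer|
    next unless candidates[peer].delete?(digit)  # eliminate from the peer
    if candidates[peer].size == 1
      # The peer just dropped to a single possibility: it's forced, recurse.
      assign(candidates, peers, peer, candidates[peer].first)
    end
  end
end
```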
Now, in Ruby there are a few libraries that let you do some logic programming, and I'll talk about them a little more later on. For the most part, they're pretty direct translations of logic programming languages, either Prolog or miniKanren, which was originally developed in Scheme. There's a great library by Tom Stuart that's a really faithful, elegant translation of the Scheme approach to doing logic programming with miniKanren. But I wasn't able to find anything that really felt Ruby-ish to me, that felt good as Ruby. So for the last little while, I've been working on developing a logic programming library that I'm tentatively titling Russell. It's a general approach that lets us turn over this problem-solving and wash our hands of it. I want to take a look at how we solve Sudoku in Russell.

Again, this is Peter Norvig's constraint propagation approach, and with a logic programming solver we can instead do this, which is a nice little reduction. But going a little deeper: if we look more closely, this is Norvig's solution without all the data-structure setup, the parts representing the rows and columns and individual squares, and without the I/O, the printing and the input handling. This is the core of the actual problem-solving code. With a logic programming engine, we can turn that into this. The engine doesn't know anything about Sudoku, nothing at all, but we hand it data structures that represent a Sudoku board, plus a few constraints, and it knows what to do.

So let's take a look at what that looks like. Once again, we want to be declarative here: we want to just say what we're looking for and not worry about how to get it. We want to declare that each of these squares is limited to the values 1 through 9, and we want to declare that each unit, each box, row, or column, has to include all nine digits. Here's how we might do that first bit. We create a new logic solver, and for each square (assuming we've set up our data structure already) we say: your domain is the range 1 through 9; you're allowed to take some value in 1 through 9. Next, we want to say each unit has to include all nine digits. So if we have data structures representing our rows, columns, and boxes, we can say each unit has unique contents; we're making that assertion, making that claim. And for the values 1 through 9, we assert that each of them must be included in each unit, in each row, column, or box. Making an assertion is going to the logic solver and saying: hey, take me into account; I'm a rule you need to follow.

And then we're off to the races. We've got a data structure that says, here are the known values already. I'm just using symbols to represent unknowns, so each square has a symbol associated with it. We tell the solver to go through and assert that each known square is set: this will assert that a1 equals 5, that b1 equals 1, and so on and so forth. And that's it. That's all we need to do: tell the computer, here are the constraints I'm working with, go to it. There it is on one slide.
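Pulling that description together, the slide looks something like this. Russell is still a first-draft work in progress, so take the exact names here (Solver, domain, has_unique_contents, includes, equal, and the squares/units/givens structures) as a reconstruction from the talk rather than a settled API:

```ruby
solver = Russell::Solver.new

# Every square may only take a value from 1..9.
squares.each { |square| solver.assert domain(square, 1..9) }

# Every unit (row, column, box) has no repeats and contains every digit.
units.each do |unit|
  solver.assert has_unique_contents(unit)
  (1..9).each { |digit| solver.assert includes(unit, digit) }
end

# The givens: assert each known square's value, e.g. that a1 equals 5.
givens.each { |square, value| solver.assert equal(square, value) }

solver.solve
```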
So that's the promise of logic programming: being able to boil down these puzzles, these problems with potentially unknown inputs, like Sudoku, where we don't really know what fills the squares in, to a known output state, or maybe the other way around. We could turn this backwards and, in theory, use the same machinery to take an output and work back to a not-quite-completed grid to hand off to someone else to solve. It's declarative, it's relational, and it gets us inferences without having to worry about the problem-solving.

I wanted to get into what's going on behind the scenes here a little. One of the great things about a good logic programming library is that it lets you create your own relations. In these examples we saw equal, we saw has_unique_contents, we saw this domain bit. And the relations we deal with, really any relation we need to describe to the computer to solve these problems, can be built up out of very simple logical building blocks. There's a concept called functional completeness, or expressive adequacy: if you've got a few very simple logical operators, you can do anything you need to. With saying two things are equal to each other, saying two things are not equal to each other, logical and and or, and negation, just these few operators can build up pretty much any bigger, broader relation we need, like has_unique_contents, for example. In fact, this is on some level what your computer does all the time, with ones and zeros and logic gates in silicon on the chip: it builds up everything it does out of very simple little logical building blocks.

Here's has_unique_contents. If we pass a collection to has_unique_contents, we're trying to say: hey, no two values within this are the same. So we can boil that down to exactly that: no pair can be equal. We make that assertion, and that gets set; it just boils down to this underlying logical operator. The next one's a little more complicated: exactly_one_member. If we want the collection to have only one of something in it, we can represent that by saying, for any one of the members, it can be equal to that thing, and everything else will not be. Here's what's going on with any: any is prepared to accept a list of possibilities. We take the collection, create a new possibility for each individual member, and use the splat operator to send that back out to any as a list of possibilities it needs to take into account. But it's built up out of these simple logical building blocks: equal for the one member we want it to be, and not_equal for all the others.

This is what makes a logic programming library extensible. If the designer hasn't built in something to deal with your specific data structure, or a specific constraint you need, you can build it out of existing constraints, out of the rules that make up the core of logic. Ideally, with my next iteration of this, I'm going to make it even more Ruby-ish. Right now I'm passing the collection in as a value; I'd like to be able to call the method on the thing itself, so that it reads like real Ruby. But we'll see where that goes.
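Written out, those two definitions might look something like this, built from the primitives just described. The signatures, and the little stand-ins that make the sketch run on its own, are assumptions:

```ruby
# Stand-ins so the sketch is self-contained: a piece of knowledge is plain
# data, all gathers claims into one branch, and any keeps branches separate.
def equal(a, b);     [[:==, a, b]];       end
def not_equal(a, b); [[:!=, a, b]];       end
def all(*claims);    [claims.flatten(1)]; end
def any(*branches);  branches.flatten(1); end

# No two values in the collection may match: one not_equal claim per pair.
def has_unique_contents(collection)
  pairs = collection.to_a.combination(2)
  all(*pairs.map { |a, b| not_equal(a, b) })
end

# Exactly one member equals the value: for each member, one branch where it
# matches and every other member does not; any keeps those branches apart.
def exactly_one_member(collection, value)
  branches = collection.map do |chosen|
    others = collection - [chosen]
    all(equal(chosen, value), *others.map { |other| not_equal(other, value) })
  end
  any(*branches)
end

exactly_one_member([:a, :b], 9)
# => [[[:==, :a, 9], [:!=, :b, 9]], [[:==, :b, 9], [:!=, :a, 9]]]
```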
Under the hood, we've got a few interesting things going on. If you're not interested in implementing a logic programming library, this part might be a little less interesting to you. That said, if you're interested in this topic at all: I spent a week or two just trying to find a resource on some of the things I'm about to tell you about. They turn out not to be that difficult, not that crazy, but there just isn't much out there on them, it seems. So I'm going to give you a kickstart into grasping what's going on in these tools you have available to you.

The core of most logic programming engines is something called unification. Unification is a sort of pattern matching against knowledge stored in a data structure within the solver; it's been called two-way pattern matching. This goes back to that relational idea: unification is what makes logic programs relational and able to fill in the gaps. Here's a relational version of concatenation. If we've got these two arrays and we know we want the result to look like the third one, the unification process digs into this and matches those question marks up with the values they end up needing to hold. The way it does that is by going piece by piece, value by value, through the different structures, trying to match them up and building a store of what it knows. Then we can query that store later on, either when we're adding new information or when we're trying to get an answer to our problem: hey, what do we know about these previously unknown values, given the information we've put in?

The substitution list is how that knowledge is stored, and it's pretty straightforward; I've just implemented it as a hash. When we assert that P matches with 5, that P unifies with 5, it stores 5 with P. We might say that whatever R ends up being is what Q needs to match; that's the second line. So we already know P is 5, but now we add this new bit that Q has to equal R. Later on, we add that R ends up being 8. And there's a process called walking through this list: when we ask about Q, it jumps to R and says, okay, now what's the deal with R? and it ends up getting 8. So later, when we query whether P matches with Q: P is 5, and Q is R, which is 8, so we can make that comparison. That's the knowledge-storage mechanism going on under the hood, at least in this implementation of a logic library.

The algorithm for doing this is a really powerful one. There's a choice quote, it's just great, that compares unification to a carpenter's tool: all the normal tools are put away, and the carpenter has to do everything with a buzz saw that's chained to the ceiling. It's really powerful, it'll do everything he needs, but it's a little bit tricky to use sometimes. That said, the core of how this puts the pieces together is relatively straightforward. Basically, you pass in two things you want to match up, and the system can give you one of three results. Either it already knew they matched, and you don't need to do anything, don't need to change anything. Or it finds that it already had results contradicting what you're telling it, and it fails and says no go. Or it can take this new information and say, oh, good to know: I'm going to extend this substitution list, extend what I knew, and add this in.
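Here's that substitution list and the walk in miniature; a sketch of the idea rather than the library's exact code, with symbols standing in for unknowns:

```ruby
substitution = { p: 5, q: :r }  # P matches 5; Q matches whatever R ends up being
substitution[:r] = 8            # later we learn that R is 8

# Walking chases chains of unknowns until it hits a real value (or an
# unknown with no entry yet).
def walk(value, substitution)
  value = substitution[value] while substitution.key?(value)
  value
end

walk(:q, substitution)  # => 8  (Q jumps to R, and R resolves to 8)
walk(:p, substitution)  # => 5
```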
And that third outcome, extending, is what happens when we've got a still-unknown value and this new piece of information is exactly what was missing. So here's an implementation in Ruby of the things we're talking about; we'll get to unification in a second. The constraint propagation part is pretty straightforward; it just follows what we've been doing, and it's a really naive approach, not optimized. We start off looping, and we want to loop until things stop changing, so we've got some flags that we reset each pass. The key thing is that we take all the possibilities we haven't totally solved yet, loop through them, and apply the rules. We keep the ones that succeed, the ones that don't end up contradicting something, and we keep going until nothing changes anymore. And that's that. There are a few things going on in the background, where the rules know how to mark changes and so on, but it's a pretty intuitive approach: we're just looping until we reach a fixed point.

The unification mechanism in the background is this, and we're going to break it down, but it fits on one slide; it's not too crazy. And it's the core of what you need to know if you're trying to figure out how this happens on your machine. I mentioned walking: we don't just want to put raw data into our system; we want to put in the most refined, up-to-date data we have. That means if we already knew that Q matched with R, we want to keep going from that point rather than starting over from Q. So we walk the values to start. Then we're going to create an extension to our list, new information. I'm using symbols to stand in for unknowns. If both values are unknowns and they're already the same, we don't need to do anything; we return an empty extension, because we have no new changes to make. On the other hand, if either of our two values is an unknown, then we want to set this new information into our store; we add it to our extension, the new information we're going to add in. So if the left input was our unknown and the right was a 5, we say: that previously unknown thing is going to be set to 5. And if all of that fails, if we don't have a logic variable, as it's called, if we don't have an unknown, or if we've got a data structure like an array that we're trying to match up, then we pass it off to be broken down into its constituent values. The expectation is that the array, for example, knows how to break itself down and compare each value in turn, and that just happens by passing the pieces back to the same unification method. So that's the idea: we take new information, and we either add it in, or we find that we already knew something about it and return a value showing we don't need to do anything, or we do have new information to add in.
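As a sketch, that one-slide method might look like this, reusing walk from above; nil stands for a contradiction, an empty Hash for "nothing new", and a non-empty Hash for the extension:

```ruby
def unknown?(value)
  value.is_a?(Symbol)  # symbols stand in for unknowns
end

def unify(left, right, substitution)
  left  = walk(left, substitution)
  right = walk(right, substitution)
  return {} if left == right                  # already known to match
  return { left => right } if unknown?(left)  # bind an unknown to the other side
  return { right => left } if unknown?(right)

  if left.is_a?(Array) && right.is_a?(Array) && left.size == right.size
    # Structures break themselves down: unify piece by piece, letting each
    # piece see the bindings accumulated so far.
    left.zip(right).reduce({}) do |extension, (l, r)|
      piece = unify(l, r, substitution.merge(extension))
      return nil if piece.nil?                # contradiction somewhere inside
      extension.merge(piece)
    end
  end                                          # anything else: contradiction (nil)
end
```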
Now, unification, like I mentioned, is a really powerful thing, and it turns out we can also use it for keeping things apart. Unifying says two things are equal; we might also want disequality, saying they're not equal. And it turns out unification is a convenient way to do this as well. If you think about it, the outcome of a successful unification is that we're saying two things match; that's what goes into our knowledge store. The flip side is that we don't want them to match. So if we test whether they would match, and they don't, that's exactly what we're looking for: if we don't want things to match, we just test whether they do, and we avoid it. On the other hand, if the potential unification comes back and says, hey, already set, no changes needed to add this new information, that's not good, because it means our existing knowledge base already thought these two things matched up. So we'd invalidate the solution at that point, if we're trying to keep these things unequal. And the third outcome is that it comes back and says, oh, I didn't know anything about these, and we say, great: let's make sure we add that to our list of things that should stay that way, our list of disequalities, and say, don't change that; keep them separate.

There are more ways to use this unification algorithm. It can take rules and test whether one rule covers all the territory of another, so you can just get rid of the other one; it can take care of its own efficiencies on some level. What I've implemented so far is way more naive than that and doesn't go there. But this tool is what you're looking at, 99% of the time, if you're thinking about logic programming.

I also want to talk a little about what's going on behind the scenes with the API we were looking at. We saw equal and not_equal. The big idea is that we're creating a new piece of knowledge when we make the assertion that two things are equal. When we assert equality, equal returns a new piece of knowledge that says: hey, these two things are unified. Similarly, not_equal returns a piece of information saying they're not going to be the same. So equal and not_equal are API wrappers over the unification and the disequality going on behind the scenes. They return lists because we need to keep track of branching possibilities: if we want to be able to say this could be true or that could be true, we need to keep those separate, each one available to us. And we can implement logical and and or to accept these lists of possibilities and figure out what to do. What's going on with all? All is saying we need all of these things to be true in the same space; we need to combine everything. The store of knowledge knows how to take the unifications and add them to the unifications it was already aware of, take the disequalities and add them to the disequalities it was already aware of, and so on, combining the existing combinations with the new information we're adding. Product takes the existing possibilities and applies each new piece to each existing piece. Any is pretty simple: it just keeps things in a list as separate, individual possibilities. And these are those building blocks I was talking about: with these, we can build up whatever logical operator, whatever relation, we're really interested in.
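Here's a sketch of that disequality check, reusing the unify from above and reading its result backwards; the method name and the disequalities list are mine, for illustration:

```ruby
def assert_not_equal(left, right, substitution, disequalities)
  extension = unify(left, right, substitution)
  if extension.nil?
    true                        # they can never match: nothing to track
  elsif extension.empty?
    false                       # they already match: this solution is invalid
  else
    disequalities << extension  # they could match: remember to keep them apart
    true
  end
end
```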
How are we going to use this? I want to talk about how, and I want to talk about when. Like I mentioned, in Ruby there are some options. There's the external logic programming language Prolog, and there are a couple of gems that implement Prolog-like approaches. They haven't been updated in quite a while, and you kind of need to know Prolog to use them, so if that's up your alley, go for it. The approach I'm using is based on miniKanren, the embedded language that was originally written in Scheme. This is what I mentioned earlier by Tom Stuart, who spoke here at RubyConf a few years ago on a different topic: he did an elegant translation of that Scheme into Ruby, and that's a usable, although very, very minimal, approach. That's the Kanren gem, and it's available. There are other areas in this field we don't have time to talk about; Boolean satisfiability solving is something you can look into, and there's a gem that wraps a pretty common Boolean solver. And like I mentioned, there are external logic languages: Prolog is the most common one; Mercury is, maybe not more developed, but different, and kind of based on Prolog; and there's a very recent one called Picat that's come out in the last few years. If you want places to look for different examples of the problem-solving approaches or interfaces on offer, those are things you can take a look at.

When are we going to use this? I mentioned three big characteristics: we want it to be declarative, we want it to be relational, and we want to be able to have inference. If your use case isn't really aligned with those, logic programming might not be what you need. For example, if we don't know how the world works and have no way to describe the relations, no way to describe how things fit together, then we're not going to be able to get a complete solution, at least not within a logic programming approach. The other issue is having too little information to go on: this doesn't have any secret magic sauce that cuts down on computation time. If we pass it an empty Sudoku grid, it still has to do all the brute-forcing it would anyway. Constraint propagation only really works when you've got data. It's somewhat more efficient than raw brute force, but it will still be trying a lot of possibilities without narrowing in on a particular solution. So if we've got too little information to narrow down our search space, we're still stuck; this isn't pure magic.

When can we use it, though? What we're looking for are situations where our inputs, the variables we're dealing with, are constrained, where they range over a limited number of values. Finite domains: any area with finite domains is somewhere this could be applied. Some particular examples: we might have a system that we're trying to generate possibilities within. If we can describe how the system works and what the different variables range over, then we can ask, how can these all fit together? and have it come up with solutions. I'll take a look at an example of that in a bit. The other big area where logic programming is used is rule-enforcing systems, or expert systems. One of the Prolog implementers claims that about a third of airline tickets around the world go through a logic programming system, presumably doing this kind of rule enforcement, the fare checks and whatnot.
Logic programming, Prolog I believe, was also used by IBM for Watson. If you saw the Jeopardy episodes a few years ago where Watson went up against the humans: it uses Prolog for natural language processing. It has rules about how language works, takes whatever inputs it's given, and is able to figure out how the rest should look and decide whether it's a query or a statement, things like that. So these are systems with constrained domains, where we're able to search intelligently and narrow in on a solution with these pattern-matching or constraint propagation approaches.

One of the examples I'm most interested in is something I found out about probably a month ago. Oskar Wickström, in Malmö, Sweden, published on his blog his work on generating music exercises using Clojure's core.logic library. Music is a great example of where this shines. There's a lot that goes into music, a lot of different variables, but each one has a range of options we can get our hands wrapped around: only so many different note lengths we can play, only so many pitches on the scale, only so many keys if you're familiar with your circle of fifths. For each variable that goes into it, we can define a range of potential values and leave the rest up to the computer. And you can imagine something like this. This isn't his code, but you can imagine using this pattern matching and this inference to say: hey, I want to practice my eighth notes and my sixteenth notes; I want the treble clef, because I don't know bass clef, I'm a trumpet player; but whatever key it's in, I don't care, and waltz or march, whatever, I'll leave the time signature up to you. We can hand that off to the computer to be creative in generating something that fits our requirements. And this is the result: music generated by logic programming. This is Oskar Wickström's example of that, using Clojure's core.logic, putting in these constraints and saying, go to it.

So that's the promise of logic programming: we define how the world works, and in a simple, straightforward, declarative way, we get answers. I'd like to see more of this happening in Ruby. Like Matz was saying this morning, we want to keep moving forward, and this is an area where many languages don't have comprehensive solutions. Clojure has a great one; JavaScript has an implementation of miniKanren. I want to see more developed tooling in Ruby, and I'm going to keep working on this. If you're interested, try writing an expert system in one of these tools. Pick one up, take a couple of weeks to look at it, and write a Sudoku solver, or write something that narrows things down in music or some other area that has these kinds of constraints.

If you're interested in learning more about this, Clojure's core.logic library is probably the best-developed example of one of these embedded libraries, and you can take a look at that. miniKanren, that embedded approach, is still fairly academic, but the person who's done probably the most work on it is William Byrd, and you can take a look at his papers. He's also got something like a 24-part "uncourse" on YouTube that goes really in depth. The Art of Prolog is a great book on Prolog that you should read just because it's one of those mind-expanding things; that's from, I don't know, the '80s.
I think it's been around for a while; you can get a copy for cheap. Constraint Processing, if you're interested in that constraint propagation approach, is more of a math textbook. And if there's one thing you do, please make sure you look at sentient-lang.org. I just found out about it like two days ago, and I don't know much about it, but it is an awesome example of how this approach can be put to use. It's got a number of different examples that run in the browser, of this seemingly magic problem-solving that logic programming lets us do. So that's sentient-lang.org; take a look at it. I have no affiliation with it; it's just awesome.

Thanks. Later on, I'm hopefully going to have a blog post up that goes into a little more depth on some of these things. And if you want to take a look at the code, I warn you: it is absolutely a first draft, and Hemingway was absolutely right about first drafts. So, you know, buyer beware, but it's up on GitLab. Thanks for coming.