All right, so I want to thank everyone for being here today. This is the February meeting of the Houston Functional Programming User Group, and today we are really excited to welcome Will Byrd. He co-created the Kanren language and co-authored The Reasoned Schemer, and from what little I know of him, he's a very nice guy who is really interested in this stuff. So I'm going to turn it over to you, Will. We will record the talk and the first part of the Q&A, and then we will turn off the recording and continue our Q&A. So, Will, it's all yours.

Thank you. And by the way, since you're in Houston, if you haven't already, maybe you can invite Dan Friedman, because I don't know if you know this, but Dan Friedman actually wrote The Little LISPer when he was in Houston. He was a faculty member at the LBJ school of foreign affairs, or whatever, international affairs or something, and that's actually where The Little LISPer came from.

What was that? You attended that? When Dan was teaching it?

I don't remember for sure.

Okay. Well, it's a small world. And we have an IU person here, so that's great. Anyway, if you like this sort of thing, I definitely recommend inviting Dan. So, thank you for inviting me. And how did you decide to invite me, of all people? I'm just curious. Usually I get a big head, like, oh, my work is well known, and then they'll say something like, well, we had a list of three possible speakers and you're the only one we hadn't ever heard of. That's usually what I hear. So I'm just curious.

Not at all. I certainly know your work. And I think I was asking people in the group, who would you like to hear from? And I said, I'm an academic, I am unashamed, and I will ask basically anyone. And if they don't want to come, they just don't respond.

That's right. That's right. They'll just ghost.
So, no, but I think you were one of the first people that we invited for this year.

All right, well, I appreciate it. Thank you. Yes, I'm going to talk about... oh, there's... yeah, she's interested in it. Oh, it was the previous organizer of this group? I'll suggest your name. All right. Well, thank you very much.

I gave a keynote in Japan at a conference where all my papers had been rejected. Well, finally, I've made it, right? And then I asked later, why'd you invite me? And it's like, yeah, you're the only speaker we hadn't heard of, or whose work we didn't know about. And, well, hey, that's actually a pretty good algorithm, right? If you're going to invite people, invite the people you don't know about. That stung a little bit the first few times I heard that answer, but then it's like, actually, that's a great strategy, I think. So, thank you.

It's nice of us to tell you we hadn't heard of you.

Yeah, you know. Okay. A funny thing happens, because originally, when I started working on miniKanren and the relational programming stuff, even before there was a miniKanren there was Kanren. And then we started working on the book. Eventually the first edition of the book came out in 2005, and so it was Friedman, Byrd, Kiselyov, I think; I forget the order. And everyone was like, Dan and Oleg have a book. And that happened for like ten years. And now it's, oh, Will Byrd created miniKanren. Neither one's exactly true. But that's just kind of the way things happen if you work on something a long time: people sort of give you more credit than you deserve.
At this point, there have been a lot of people who've worked on some variant of this work, too many for me to thank individually, but it's been about 20 years now since we started working on miniKanren.

Well, okay, so I saw that you have like a two-hour block, and speakers go from 30 minutes to whenever. I have a bad habit where I'll talk the whole time; I have talked for four hours on miniKanren. So I think it's more like, what do you all want to do? I'm kind of in a chill mood. I'm happy to be interactive, and I can show you some things and we can have some discussion, or I can give you a typical demo, or I can talk to you about why miniKanren sucks and we need a new version. Whatever you want. But part of it is some people might not be familiar with logic programming or relational programming, so I could start with my high-level take on what those things are. On the other hand, maybe people are somewhat familiar with that.

I definitely think, jumping in, and speaking for Jared, that an introduction would be excellent. And then two questions, because we'd like to know why it sucks.

Okay, yes. Well, you know, all languages suck, right? It's sort of like in physics: all models are wrong, some models are useful. And there are the two classes of programming languages, the ones that people complain about and the ones no one uses. Okay, well, sure, I'm happy to give you a demo. And I promised to talk on sort of what comes next, the next generation of language. And I'll be honest that I'm still in the ideation phase, so I could talk to you about why miniKanren sucks, and so do all the other languages that are trying to do this, and how I can imagine a language that might suck less. It'll probably still suck a lot.
But maybe we can incrementally get towards a suck-less language. So I'm happy to talk about that after I give the intro. And I know there's a wide variety of experience with logic programming, constraint logic programming, miniKanren, and so forth; some people may have implemented miniKanren, and some people maybe haven't heard of it before. So that always makes it interesting. I'll give a demo, and I'll talk about the high-level philosophy of what the style of programming is like.

It's funny to me that in the functional community, I'm known for not doing functional programming, basically. I mean, we have a functional implementation of a constraint logic language, but at some point we took over about half of the Scheme Workshop: half the Scheme Workshop papers were on miniKanren. Maybe it's time to have our own workshop, because when everything's constraint logic programming, is it really the Scheme Workshop at that point?

All right, let me do some screen sharing here. Okay, so here's my usual spiel about what I call relational programming, as opposed to functional programming. What is relational programming? You can sort of think constraint logic programming, if you're familiar with those terms. How does it differ from functional programming?

Okay, so we all know and love functional programming. Maybe. And functional programming is based on a great idea. Now, there's a lot of debate, actually, on what functional programming is. If you ask Bob Harper what functional programming is, he'll say Scheme programs aren't functional programs; they're not even programs. They're like fragments of syntax. I'm going to take Bob's worldview and replace it with my own.
Okay, so in any case, if we think about what the idea of functional programming is, part of the idea is that, say, we have a notion of addition. And we might say, in Scheme syntax or Racket syntax or Lisp syntax, we want to add 3 and 4, and that expression on the left would evaluate to 7. And we say plus is a procedure, or a function; let's say it's a function that can take, say, non-negative integers and sum them. And so the result of calling this function is a value, which is 7. So here we have the notion of a function, and we also have the notion of input, and we have the notion of output. This dichotomy, this difference between input and output, is really critical, and we're so used to it that I think most of the time we don't think about it.

I'll claim, to be provocative, that this is actually a really bad thing: that functional programming is really bad. We should all feel ashamed; we should feel bad for doing it, for advocating it. It's really bad because functions are a very poor way of looking at the world. What you really should think of is relations. So instead of this false dichotomy between input and output, we should say we actually have a triple. I'll call it pluso; that's just our convention in miniKanren. And we have a three-place relation, and there is no difference between inputs and outputs; that concept is gone. It's sort of like if you talk to the Smalltalk people: Dan Ingalls says that the idea of an operating system is a bad idea; there shouldn't be an operating system; an operating system is all the junk that you can't put anywhere else. And in Smalltalk they don't have an operating system; it's just objects in a virtual machine. So in the same way I will claim, whether or not I actually believe this,
I'll stay silent on that, but I'll claim that functions are bad, functional programming is a bad idea, and we need to move past it. Okay, so if we want to get to higher levels of abstraction, we need to talk about relations.

Why are relations interesting? Relations are interesting because they're very flexible, and because, in addition to the notion of a relation, we're going to think in terms of an algebra and algebraic reasoning. If there are any Haskellers out there, close your ears: I don't mean category-theory-flavored algebra; we're going to talk about middle school or high school algebra, though there's certainly a lot of algebraic reasoning here. The idea is that we can have unknowns represented as variables. So with our three-place relation, pluso 3 4 7, we could replace, if we wanted to, that third position, that third argument, with a variable Z that represents an unknown quantity. And then we can ask a query, and the system somehow will figure out that Z is equal to 7, and it will give us a solution in terms of equations, or disequations, or constraints in general. So now we can set up constraints and relationships and ask queries. Then we could just as easily say, hey, maybe we don't know what Y is; tell me what Y is. Or, more interestingly, we could say we don't know what X and Y are together, as a pair; tell me what those values are. And now you're getting answers like: if X is 0, Y is 7; if X is 1, Y is 6. In this case there are finitely many solutions if we restrict ourselves to natural numbers, but if we relaxed this to all the integers, there'd be infinitely many solutions.
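Those addition queries can be sketched in miniKanren notation. This is only a sketch, assuming a hypothetical pluso relation over plain naturals; the actual miniKanren arithmetic relations encode numbers as little-endian binary lists, so real answers print differently.

```scheme
;; Sketch only: assumes a hypothetical `pluso` over plain naturals.
;; (The real miniKanren numbers library represents naturals as
;; little-endian bit lists, so concrete output looks different.)

(run* (z) (pluso 3 4 z))
;; z is forced to be 7

(run* (y) (pluso 3 y 7))
;; y is forced to be 4

(run* (x y) (pluso x y 7))
;; enumerates finitely many pairs over the naturals:
;; (0 7) (1 6) (2 5) ... (7 0)
```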
So now we're in a space where we're talking about not just one answer, and not just functions that map inputs to outputs, but relations and placeholders. Now we have a solver: we can ask queries and solve over these sets of constraints that we're building up, and let the underlying computational system do the handiwork and the dirty work of figuring out what those assignments are. It's very similar to "solve for x," or "solve for x and y," in a high school algebra problem. So that is the high-level idea. We're trying to get additional flexibility, and we gain abstraction in several ways. First of all, we have removed the idea of input versus output. It's not a necessary idea, and we're more abstract by removing it. Instead, there are relationships. And you can, by the way, map this very directly to databases and database tables. We can think of the table of X, Y, Z values that satisfy this relation. The table could be finite; the table could be infinite. If the table is infinite, then we're going to have to generate the table lazily. If it's finite, then we could write out the whole table beforehand if we want. And when we're building up a program that's represented by a bunch of constraints, you can think of that as doing joins over these tables, which could all be infinite, potentially, so part of the computation has to be lazy. So it's abstract in a way that we claim functional programming isn't, and goes beyond functional programming in some ways. And we're playing these sorts of algebra tricks. And there are some other interesting properties. So I'll try to make this a little more concrete. I'm not going to show you numbers, though. That's one of the good and bad things about miniKanren.
It's like: oh, I want to create a program synthesis system for a Turing-complete language that supports recursion and symbolic, list-based manipulation? No problem. Easy. I want to add two integers, or two natural numbers? Okay, sit down, calm down, and we're going to have to have a talk. That's one of the trade-offs right now with miniKanren. So I'm not going to show you this example, because it turns out that, while it's easy to explain, it's actually quite tricky to implement in miniKanren. We do have ways to do it; in fact, we have many ways to do it, which is not necessarily a good thing. That's the first chink in the armor, if you want to think of it that way.

Okay, so let's load up. The hello world of functional programming is, of course, what? What's the hello world of functional programming, do you know? Factorial. Factorial is the hello world of functional programming. If you really want to go wild you can do Fibonacci, but usually factorial is the hello world. The hello world for constraint logic programming, or relational programming, is append, or list concatenation. So, in good old Scheme, which has symbols and lists, we can append the list (a b c) to the list (d e), and we get (a b c d e). Great. So that is the functional version: we have concatenated these two lists to get another list, so we have clear inputs and a clear output.

Now we're going to do a similar thing in miniKanren, and I'm going to define appendo. Let's call the arguments, how about, l, s, and ls, where ls is the concatenation of l and s. I'm not going to get into all the details of how miniKanren works; we can backtrack and I can explain exactly how this works if you want. I'm just trying to paint the big idea right now, so excuse me for sketching over this, but I'll explain at a high level what's going on.
So basically, we're saying that appendo is a relation that we're representing in Scheme, which is the host language that miniKanren is extending. appendo is a three-place relation: it has two lists, l and s, and ls, the third argument, is the concatenation of l and s. I don't want to talk about them in terms of inputs and outputs, just their positions. conde allows us to make a choice, so we're going to have two conde clauses, two different choices. Either the first list is empty, in which case the second and third lists are the same list: s == ls. That == is a type of equality based on an operation called unification, technically first-order syntactic unification; there are many types of unification, it turns out. We can also introduce temporary logic variables, placeholder things that we can do algebra over; these are what are called logic variables. fresh allows me to introduce some logic variables. So in the second clause I can say l might be a cons of, say, a and d (that's the car and the cdr, to Schemers or Lispers), and we'll say that ls is the cons of a and res, and then we're going to do a recursive call to appendo on d, s, and res.

Some of these elements you should recognize if you're familiar with functional programming. You probably know what lambda is; here, appendo takes three arguments. conde is kind of like cond in Lisp or Scheme, McCarthy's cond operator, which allows you to make a choice. So it's kind of like an if, a conditional. The difference is that in a relational setup, we try all possible choices, not just the first one whose guard or test succeeds. All the choices could be tried and could produce answers.
fresh has two purposes here. It introduces some of these variables that we looked at before in the addition example, these logic variables that we can do algebra over. And then here we're destructuring l, saying, hey, l is going to be a pair, and we're calling the head of the pair a and the tail of the pair d. That's one way to look at it. Or l itself, coming in, could be one of these logic variables that we do algebra over, in which case it may not even have any structure; maybe it doesn't have any value associated with it. This is a big change in this world: we are allowed to do operations on variables that we don't know anything about. It's not like we're going to get an error when we operate on these variables. It's just that when we perform operations involving those variables, we might give those variables more structure, or even completely ground their values, or we might find out that whatever operations we're doing with those variables are inconsistent, and then our entire computation can fail through inconsistency. So here we're saying l, which could be a logic variable whose structure we don't know, has got to be a pair, and it's got to be a pair of a and d. Since a and d were just introduced, at this point we know they don't have any values associated with them. But then over time, in the recursion, d might get structure; or, because ls might have structure coming in and we're associating a with ls, maybe a gets structure. So there's this whole game where we can accumulate information over time in different ways. And the point is, we don't care whether l already has structure coming in to a call to appendo, and we don't care whether s or ls have structure. This program will work if they're fully ground and have concrete values, or if there's no information at all about l, s, and ls.
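The relation just described looks like this in the classic miniKanren-in-Scheme style (a sketch; exact surface syntax, such as defrel versus define, varies slightly between miniKanren implementations):

```scheme
;; appendo: a three-place relation among l, s, and ls,
;; where ls is the concatenation of l and s.
(define (appendo l s ls)
  (conde
    ;; First choice: l is empty, so s and ls are the same list.
    ((== '() l) (== s ls))
    ;; Second choice: l is a pair (a . d); then ls is (a . res),
    ;; where res is the concatenation of d and s.
    ((fresh (a d res)
       (== `(,a . ,d) l)
       (== `(,a . ,res) ls)
       (appendo d s res)))))
```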
And so we're allowed to reason more abstractly. Okay, so that is a simple miniKanren relation, sort of the hello world of constraint logic programming. And now we're going to do what's called a run. Like I said, miniKanren is embedded in a host language, normally Scheme or Racket. It could be Clojure, like core.logic; it could be OCaml, like OCanren; it could be Haskell, or Java, whatever. We use this run interface operator to act as an intermediary between our host language, which is Scheme in this case, and miniKanren, because Scheme doesn't know about logic variables; Scheme doesn't know about conde or fresh or any of those things. Obviously there's some magic behind the scenes, which I'll show you a little of. We need somehow a way to talk to Scheme. So we can write our regular Scheme code, and we can also write our regular miniKanren code; they kind of exist in two different worlds, but they can interact in a way. So here, with the run, we say we want one answer back, if it exists. And q is what we call our query variable: whatever the value associated with q is, that's what we're going to see at the end of the computation. And now we can call appendo, something like our first example, but now I can put q, our query variable, in that third position of appendo. Let's see what happens. All right, so what we get back is a list containing one list. It's not the list (a b c d e); it's the list containing the list (a b c d e). That's important. So here we've appended, or concatenated, these two lists and got back what we expected. That would be kind of boring, except now we can play this game where we put the full list in that third position and run a computation right now.
And we get back this kind of strange-looking value: the list containing _.0. _.0 is a representation of a query variable that isn't associated with any concrete value. It means that q could be anything. You can think of this in logic like an existential; actually, in this case, you can almost take it like a universal: for all q, for any value of q, it is the case that appending (a b c) to (d e) gives you (a b c d e). That's maybe one way to think about it.

But we can also put the q in other places, like right here. Now we've changed the meaning of this query: now we're asking, for what value of q is it true that appending the list (a b c) to q gives you the list (a b c d e)? In this case it's pretty clear that q would be the list (d e), and that's the value we see associated with q. More interestingly, we can have multiple query variables. So we could say: for which values of X and Y is it true that appending X and Y gives us the list (a b c d e)? Now we see that we have a list containing a list containing two lists; we have a little more structure, because now we have two query variables. The main point is that X is the empty list in the first answer, and Y is the list (a b c d e). We can ask for a second answer; in this case, we get two results back, and in the second answer, X is the list (a) and Y is the list (b c d e). We could ask for six answers; we could ask for a bunch of answers; we could ask for all the answers with what's called a run*. Instead of putting in a number, run* just says: give me everything. In this case, there are six answers. We can also play slightly more advanced games. So for example, I could say, well, how about this? How about I have a list... I'm going to change it to one... how about we have a list...

I had a quick question. What happens if there's no solution?
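The queries discussed so far look roughly like this (a sketch, with the answers as described in the talk; run with multiple query variables is supported in newer miniKanren implementations):

```scheme
(run 1 (q) (appendo '(a b c) '(d e) q))
;; => ((a b c d e))   a list containing one answer

(run 1 (q) (appendo '(a b c) '(d e) '(a b c d e)))
;; => (_.0)           q can be anything; the goal still succeeds

(run 1 (q) (appendo '(a b c) q '(a b c d e)))
;; => ((d e))

(run* (x y) (appendo x y '(a b c d e)))
;; => six answers, one for each way of splitting the list:
;;    (() (a b c d e)) ((a) (b c d e)) ... ((a b c d e) ())
```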
So I think you showed what happens when there's exactly one solution, but I was just kind of curious what happens if there's no solution.

Well, let's make a simple one. How about we say appending (a) to (b) gives (a b c)? Do you agree that there shouldn't be a solution to that one? Right, we can't concatenate (a) and (b) and get (a b c). And sure enough, what we get back is the empty list of answers. Okay, that's a good question.

And then I'll show you one other case, which is: we can also do this. We could say X, Y, Z, query variables in all three positions. Now what do we get back? What do you think we should get back in this case? All of this? Well, okay, this is interesting. Notice how this time I asked for a run 1. Let's see what we get. Okay, so here's what we have: we get a somewhat abstract answer that basically says, if X is the empty list, and Y is anything, then Z is the same thing as Y. We've tied together the values of Y and Z. We've represented infinitely many concrete values using one somewhat abstract value. But this isn't a completely abstract value, because we still say that X has to be the empty list. So this is an interesting answer, because it's sort of a combination of abstract and concrete.

The first thing you might notice: well, are there any constraints saying that Y has to be a list? Maybe Y doesn't have to be a list. In fact, if we go back to Scheme... I don't know if you knew this. Is that legal in Scheme? I know all the MLers and Haskellers are going to tell me whether that should be legal. That works just fine in Scheme. It's legal; whether it has good taste, I don't know. But we copied the semantics of Scheme, so there actually is no restriction that Y and Z have to be lists in this case. We're copying the weird Scheme semantics where we can have improper lists, basically.

What is that doing, basically? Sorry, what's the cons pair there?
Yeah, so it's basically treated as a cons pair, like when you cons two items together. Yeah, that's right. I could do a simple, probably the simplest, example, like that, right? And so that's just the same as consing onto 5. So you're allowed to have improper lists as a result of append in Scheme.

Now, if I really want to prove that to you, by the way... let's see. I've been learning Dvorak on the fancy Kinesis thing; I can't touch-type it, and so now this is my first time typing on a real keyboard in ages. All right, here we go. So what I can do is a conjunction: I can actually add another constraint saying that Y is 5. Okay. So now what do you think we'll see? Let's find out. Is that what you're expecting, given what we said? Yeah? Okay.

Now, here is something that I think is cool, and as far as I know, I'll claim you can't do this in Haskell or ML or Scheme, in general. I can reorder the conjuncts. So I have constraints that are in a conjunction; there's an implicit "and," if you want to think of it that way, inside of the run, or I can make it explicit. It turns out that fresh, which introduces these new, lexically scoped logic variables, is also like an "and." So this is like an "and," a conjunction, and I can permute those things and preserve the semantics, and that's true in general. There is a caveat; I'll tell you the caveat in a few minutes. But I'm allowed to reorder things. And if I go back to the definition of appendo, wherever that is... here we go. Within a conde clause, this is a conjunction, an "and." I can swap those two. I can swap those. I can move these things around. I can even take the two conde clauses, which are like an "or," and swap those around.
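A minimal sketch of the improper-list point, first in plain Scheme and then with the extra constraint added as a conjunct of the miniKanren query:

```scheme
;; Scheme's append allows a non-list final argument:
(append '(a) 5)   ; => (a . 5), an improper list

;; In miniKanren, conjoining the constraint that y is 5.
;; The first answer takes x to be empty, so z is just 5:
(run 1 (x y z)
  (appendo x y z)
  (== 5 y))
;; => ((() 5 5))
```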
Okay, so I can play all these games. I can also go within a single == and swap the arguments there. I can do all those things, and if I run my program, I still get the same behavior. Try that in Haskell.

So in some sense, this is a more abstract model. Now, it turns out that there's a little problem. There's always a catch. So the catch is: I did a run*. Let's try that again. I want to get all the answers back, right? Uh oh. It doesn't terminate. I could do a run 6; okay, there are six answers, great. What if I do a run 7? It runs forever. Why is that? Let's go back to our appendo. I was swapping things around with wild abandon. It is true that I can swap all these things around, and they are logically equivalent, as long as an answer exists. miniKanren uses what's called a complete interleaving search: if an answer exists, and you had unbounded amounts of time and memory, you would find the answer with the miniKanren search. Compare the default Prolog search, if you're familiar with that: Prolog uses a depth-first search, which is incomplete, and there are certain cases where it could search forever and never find an answer that exists. miniKanren's search, in theory, would always find a result that exists. However, if an answer doesn't exist, there's no guarantee that miniKanren can determine that. Part of this comes down to the halting problem, and part of this comes down to miniKanren's search not being very smart. If a programmer has insight into the problem, there are tricks they can use to try to get better refutational behavior. But in this program, the core of the issue is the recursive call coming at the beginning of a conjunction. The problem isn't that we swapped the two conde clauses, the "or" part, or that we swapped these two unifications.
The problem is that this recursive call comes first. Within a single conjunction, miniKanren executes the goals in order, sort of top-down. So the problem is, there might not be an answer; one of these equality constraints, these unification constraints, might fail, but the failure might happen after this recursive call. And if you look, d and res are fresh; they just got introduced; they don't have any value. So d and res are fresh at this call, and s might also be fresh. Even if s isn't fresh, you still have the problem that you're going to keep entering this conde with the first clause, trying this recursion. It's just going to keep recurring forever, looking for an answer that doesn't exist. If the answers exist, like those six answers, and we only asked for six answers, then it works just fine. The problem is we don't have refutational completeness; we have completeness. The search will find answers that exist, given enough memory and enough computation, but it might not ever be able to figure out, or prove, that an answer doesn't exist.

Now, in this case it's relatively easy to fix the problem, because we can just put the recursive appendo call at the end. And now, if I try this again, it terminates. But the problem is, in a more complicated program you might have multiple recursive calls, or multiple calls to helpers that are recursive, and you can't put them all last. In those sorts of cases, whether or not you get refutational completeness will depend on whether an argument coming in is a logic variable or is ground. So this is the Achilles' heel of Turing-complete logic programming. This is true in Prolog, true in miniKanren, and true in other systems, and this is why Bob Harper claims that logic programming is not declarative.
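The two orderings just discussed can be sketched side by side. Both versions are logically equivalent, but only the one with the recursive call last fails finitely on queries with no answer:

```scheme
;; Reordered version with the recursive call FIRST in the conjunction.
;; Logically equivalent to appendo, but refutation can diverge:
(define (appendo-bad l s ls)
  (conde
    ((== '() l) (== s ls))
    ((fresh (a d res)
       (appendo-bad d s res)       ; recurs before anything can fail
       (== `(,a . ,d) l)
       (== `(,a . ,res) ls)))))

;; (run 7 (x y) (appendo-bad x y '(a b c d e)))
;;   finds the six answers that exist, then loops forever
;;   searching for a seventh that doesn't.

;; With the recursive call last (the standard appendo), the
;; unifications get a chance to fail before recurring:
;; (run 7 (x y) (appendo x y '(a b c d e)))
;;   returns the six answers and terminates.
```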
And there is an argument to be made there, and I understand what he means by that. Logic programming doesn't live up to what you would hope it would do, because of this problem. So this is sort of the first Achilles' heel.

Now, I will show you just one more cool thing with append, or appendo, because I think it's worth seeing. We can also write things like this: we can partially instantiate arguments. The first argument now is the list (a b) followed by anything else, which we call x. And that works, and I can run* and get all the results. So it's a very flexible paradigm from that standpoint. However, there are some trade-offs. I can show you all sorts of other things: quine generation, relational interpreters, kind of our standard repertoire. I've also given a lot of talks on this stuff, and some people may have already seen those, so I don't know if you want me to do my usual dog-and-pony show, or if you want to talk a little bit about things, or at what point you want to talk about ideas for how to go beyond some of the problems. What do you think? Would people like to keep seeing more? I guess we've got one hour left.

I'm still curious why it sucks. I don't know about the suckiness: is it just that Achilles' heel, or is there more?

No, there's definitely more. Okay, so let's talk about some other issues. I didn't even tell you the good parts, in some sense; you just kind of have to believe me. But I've given lots of talks; I made this talk called "The Most Beautiful Program Ever Written," which was a successful clickbait title. So if you want, you could watch that talk, where I show what happens when you write an interpreter for Scheme in this style, and all sorts of cool games you can play.
Actually, let me just show you one thing about that first, okay, and I'll do it quickly; I'm not going to get into all the details. But I think this is probably worth seeing, since otherwise you might not appreciate the full interesting part of it, and why this is going to be tricky to make suck less. So, I just loaded an interpreter for Scheme written in miniKanren. Usually I show quine generation; I'm not going to do that here, I'm just going to show one more example. So, in Scheme (I'm using Chez Scheme right now) there is this code-data isomorphism, as people call it. I can do things like (+ 3 4), which is what I was talking about before; it evaluates to seven. But I can use this quote thing, and now that expression evaluates to the list (+ 3 4). And I can manipulate that list just like any list: I can take the first thing in it, and I get +, for example. It's not the procedure that knows how to do addition; it is literally the symbol +. But I can call this eval function. I can take something like the quoted +, just the symbol +, and get my hands on the procedure it's associated with, or I can eval the quoted (+ 3 4), that list, and get my hands on the seven. Just to make it a little more clear, I can define lst to be (quote (+ 3 4)). So I'm really not cheating here; this is a list. I can ask, is lst a list? Yes, it is. What's the length of lst? Three. And I can eval lst, and I get back seven. Okay, so Scheme has this eval mechanism and this quote mechanism, and they fit together in a nice way. That's very cool and very powerful. I can also quote lambda expressions and the like, and get those back as lists. So we're going to write our own eval in Scheme, or more to the point, I've written an eval in Scheme, with other people.
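The quote/eval interplay just demonstrated can be modeled outside Scheme too. Here is a minimal, hypothetical sketch in Python, where expressions are nested lists, symbols are strings, and a toy `ev` plays the role of Scheme's eval; the names and the environment are inventions for illustration, not anything from the talk:

```python
# A toy model of Scheme's quote/eval "code is data" idea.
import operator

ENV = {"+": operator.add, "*": operator.mul}   # symbols name procedures

def ev(expr):
    if isinstance(expr, (int, float)):
        return expr                  # numbers are self-evaluating
    if isinstance(expr, str):
        return ENV[expr]             # a symbol evaluates to its procedure
    if expr[0] == "quote":
        return expr[1]               # quote returns the datum unevaluated
    op, *args = expr
    return ev(op)(*[ev(a) for a in args])

lst = ["quote", ["+", 3, 4]]
assert ev(lst) == ["+", 3, 4]        # quoted: just the list
assert ev(lst)[0] == "+"             # its first element is the symbol +
assert ev(ev(lst)) == 7              # eval the list and you get 7
```

The last three lines mirror the demo: quoting gives you the expression as a plain list you can take apart, and evaluating that list gives you back the seven.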
And so, just like we called appendo, we're going to call eval in the relational setting, so I can say run 1 with a query variable q. Eval in Scheme takes an expression, evaluates it, and gives you back a value. Here, of course, we're playing the relational game, so we don't talk about inputs and outputs; instead we have some expression, and some value that's related to that expression. So I can give an expression like (cons 3 4), with q as the value of that expression, and I get the pair (3 . 4). Now, the interesting thing is that this interpreter I've written with other folks, this eval thing, supports a fair amount of Scheme, actually. It even supports Racket-style pattern matching and so forth, so I can write a letrec. Letrec is a recursive form, and in it I can write Scheme's append. This isn't appendo in miniKanren; this is Scheme's append. We're back in the horrible functional setting where there are inputs and outputs, so we have two inputs. I'm going to say: if l is the empty list, return s; otherwise, cons the first thing in l onto the recursion on the rest of l. Let's see, I hope my little window isn't blocking it. Okay, here we go: append the rest of l to s. So that is the definition of append, and then inside the letrec I can make a call to append, appending (a b c) to (d e). This whole letrec expression is just a Scheme expression; I could literally take it and run it in Scheme, and it would give me (a b c d e). But I'm running it within the relational version of the evaluator, and I'm doing a run 1 q. Maybe this makes it a little more clear: I can say I want to know what the value of that expression is in Scheme. So value is my query variable, and I should get back the value of that expression, and sure enough I get a list containing (a b c d e). Right, fine.
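For readers who don't speak Scheme, the append being defined inside that letrec can be sketched in Python; this is an illustrative translation, not the code from the talk:

```python
# The functional append from the talk, in Python terms: if l is empty,
# return s; otherwise cons the first element of l onto appending the
# rest of l to s.
def append(l, s):
    if not l:
        return s
    return [l[0]] + append(l[1:], s)

assert append(["a", "b", "c"], ["d", "e"]) == ["a", "b", "c", "d", "e"]
```

This is the one-directional, inputs-and-outputs version; the point of the demo is that the relational evaluator can run the same definition in other directions.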
But the part that's more interesting is that then I can say, okay, I want (a b c d e) to be my value. And now I can say, well, maybe I can put an expression e within this call to append, so I'm appending some expression e to the list (d e) to get (a b c d e), and I want to know what e could be. And okay, e could be the expression (quote (a b c)). Now I could ask for a second expression. Should there be anything else, or is that it? Is there anything other than the quoted (a b c)? Well, you might think of an expression that builds the list, something like a call to list with a, b, and c in it. The key thing here is that e is actually an expression, not a list; it's not a value, it's an expression. So we're getting back expressions that, if you evaluate them, give you the list (a b c). Like, here's one. This is a little hard to read, maybe, but if I just run this in Scheme, sure enough it gives you back the list (a b c). What this is saying is: this is a variadic lambda applied to zero arguments, and it evaluates to the list (a b c). So basically, miniKanren is doing program synthesis to find expressions, and there are infinitely many of those that, when you evaluate them, give you back the list (a b c). Now, if I put a quote in front of that expression, then there are only quoted values; in that case there's only the quoted list (a b c). But it allows us to start playing program synthesis games, and I can also do things like this: I could say, well, what about this (car l)? Can I replace that with an expression? And sure enough, I can. So now I'm actually starting to do real program synthesis, where I can start synthesizing parts of the definition of append from an input/output example.
And we've pushed this a fair amount. We've got a tool called Barliman where we can synthesize all of append from input/output examples, that kind of thing. That's kind of the most interesting example we have. Now, why does that suck? Okay, well, that part doesn't suck; that part's awesome. That's the cool part of it. That was pretty awesome. Okay, so that part doesn't suck. But fine, if you think that doesn't suck, I guess I'll just show you one other thing. Let me see if I can get this working. I don't know if you'll all see this: if I zoom in, can you see me zooming in or not? You can? Cool. All right, then this is probably easier to read. Okay, so this is this Barliman tool that I worked on, because I got tired of trying to explain to a Java programmer who asked me what I do; how do I explain it to them without giving them a three-hour tutorial on lambda calculus or whatever? It's actually better to just put an interface over what I was showing you. So I created this Barliman tool, and then Greg Rosenblatt made it much faster. The idea here is that we can have a program skeleton. This is part of a program, but it doesn't say a whole lot: we're defining some procedure, but we don't know what the name is; the comma A means that's like a logic variable, and the whole thing is quasiquoted. We don't know how many arguments the function takes, and we don't know what the body of the function is, so we don't know a whole lot about it. But what I can do is start giving examples. And here's our buddy append again. So I could give the input/output example: append of the empty list and the empty list gives you the empty list.
And here the system tries to fill in the values of those logic variables, based on that relational interpreter I was showing you, with optimizations and heuristics to help make it faster. And here it says, okay, the best guess is that the function is called append, and, well, it could be variadic, it could take any number of arguments, and it returns the empty list. Okay, a variadic function that returns the empty list. Well, that is true; that matches my example, so it's not wrong. It's like, prove me wrong, you know? You only gave me one example, so that's fine; we can try giving another example. Maybe append of (a) and (b) should be (a b). It'll think a little bit, and now it's got a more complicated program it came up with. Now it takes two arguments; we don't know what the arguments are called. And it came up with a conditional: if the second argument is the empty list, return it, otherwise return the list (a b). So it's sort of over-specializing, but I can go in here and edit things too. I might say, hey, let's call the arguments x and y. And now you can see that those arguments are referred to as x and y. So I can do this sort of editing where I want. I could also do something a little more generic. Here you can see it's over-specializing to the a and the b, because those are in the example. If I wanted to, I could use abstract values instead of the a and the b, but in this case I'll use another trick: I'll just give another example with two different concrete values, and so that gives me c and d. Okay, so at this point it could come up with nested ifs or whatever, but it actually more quickly finds this version that doesn't specialize on the exact symbols anymore, so you're not seeing a and b there. But it hasn't figured out the whole thing, because it hasn't figured out the recursion, so let's try giving it one more test.
So now you can see we're appending (e f) to (g h). Now, given that I'm doing a Zoom call, this might take a minute, so I'm not sure. But you can kind of see what's going on here: we're doing input/output-based program synthesis. Okay, it came up with it in, what, 11.6 seconds. So here it's got the recursive call, so that is the correct definition of append this time. Okay. Now, this already shows a couple of problems. Well, I don't know if the demo shows it, but there are problems latent here. One problem is: if I take that first base case and move it last, I don't know what's going to happen. It might come back in 20 seconds, or it might come back in a million years; I really don't know. Certainly, if I reverse the order of these input/output examples, the amount of time it takes is going to increase, probably exponentially. So there is a dependency, because of that conjunction problem I was talking about. This sort of programming is highly sensitive to the ordering of the goals in a conjunction. It's also sensitive to the ordering of the disjunctions, although I think that's not as critical. So, is this part of what you're trying to solve, or is this just a limitation of how it was implemented? Okay, well, that's a good question. I'm actually not that curious about whether it takes a million years or a billion years to solve; I mean, spectacular presentation. Yeah, come back in a million years, we'll see if it made progress. Okay, so I think that's the right question. One of the things that has made miniKanren a success, there are a couple of things. One is that the implementation is very small.
And in particular, Jason Hemann and Dan Friedman have a paper on what they call microKanren, and that's about 50 lines of Scheme code. Now everyone who gets into miniKanren ends up implementing a microKanren. It's kind of like Scheme: everyone who gets into Scheme implements a little Scheme interpreter, maybe a compiler. Same with miniKanren: everyone who gets into miniKanren ends up implementing one. And that implementation ended up in the second edition of The Reasoned Schemer, and that's great. So that's one of the things miniKanren is known for, and in particular there's this higher-order representation of streams. There are procedures, functions, that implement streams, which are basically lazy, potentially infinite lists, and they're how we do all of this magic: instead of materializing large tables of answers, we actually have streams, and all the search and all the laziness comes from streams. And while that is awesome, and the default search in miniKanren is pretty awesome, this complete interleaving search strategy that Oleg Kiselyov and collaborators came up with, all of those things are awesome within certain contexts, and they are decidedly not awesome in other contexts. If your computation is finite, you shouldn't be using this sort of mechanism; you should just be doing the Prolog depth-first search, which uses much less memory, has much less overhead, and is much faster. If you're using a restricted subset of miniKanren to do actual database operations, which might correspond more closely to SQL or Datalog, then you should use a totally different solving mechanism, which is what a Datalog solver is. Okay, so here's the problem: we have tied the semantics of the language to that one mechanism.
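The interleaving stream merge being described, roughly the `mplus` of the microKanren paper, can be sketched in Python with generators. This is a simplified model for illustration, not microKanren's actual code:

```python
# Two stream-merge strategies over lazy (possibly infinite) streams:
# depth-first append exhausts the first stream before touching the
# second; interleaving mplus alternates, so an infinite first stream
# cannot starve the second.

from itertools import count, islice

def append_streams(s1, s2):       # Prolog-style depth-first merge
    yield from s1                 # never reaches s2 if s1 is infinite
    yield from s2

def mplus(s1, s2):                # microKanren-style interleaving merge
    while True:
        item = next(s1, None)
        if item is None:          # s1 exhausted: drain the other stream
            yield from s2
            return
        yield item
        s1, s2 = s2, s1           # swap: give the other stream a turn

evens = (n for n in count() if n % 2 == 0)   # an infinite answer stream
finite = iter([1, 3, 5])

# Interleaving still reaches 1, 3, 5 despite the infinite stream:
assert list(islice(mplus(evens, finite), 6)) == [0, 1, 2, 3, 4, 5]
```

With `append_streams` in place of `mplus`, the answers 1, 3, 5 would never appear; that completeness-under-infinite-branches property is exactly what the interleaving search buys, at a cost in memory and overhead when the computation is actually finite.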
We've tied the declarative meaning, the declarative semantics, and the operational semantics and implementation together, completely. Which means that when Greg was trying to get Barliman to work and be more efficient, and he sped it up by like nine orders of magnitude in some cases, he had to hack it to death. He had to muck around with the search, and add special arguments saying this part of the search gets this much weight; he was doing all of this by hand. Instead, there's another approach; it's an old idea. We've been talking to a bunch of people who do miniKanren and are interested in this; for whatever reason, I think the idea is kind of in the air. It's like, okay, it is time for us to go back and realize that while that little implementation got us to where we are, we are stuck because of it. And there are several mistakes we've made over the years. The first mistake, I would say, was in the first edition of The Reasoned Schemer: we tied the implementation to Scheme macros, in a way where we didn't understand how hard that would be for non-Schemers and non-experts to understand. We say, hey, here are these little syntax-rules hygienic macros. Well, if you're not a Schemer, that is so scary. Even people like David Nolen, who implemented core.logic in Clojure, who is a brilliant programmer, didn't understand it. He didn't understand it until I wrote my dissertation and really went into all the details. And I've heard this from many other people: Scheme has a very different approach to syntactic abstraction than any other language I'm aware of. So we made this mistake of, oh yeah, here are some simple macros, they're only like five lines long, so people are going to be hacking macros left and right. No. People just didn't understand the macros; it's like, what in the world is this, I don't know what to do.
So what Jason and Dan did in the microKanren paper, which I think was brilliant in hindsight, is that they separated very clearly the macros from the procedural part. And as a result, people have ported microKanren to probably hundreds of languages at this point; it's just a standard thing you do. So it was important to realize that we needed to pull those apart. And by the way, Dan and I thought, for the first edition, that the run interface I was showing you, that's just a macro; run and run* are very, very short macros. We figured everyone was going to roll their own interface: here you go, here's a macro, have at it. No one changed it. The only people I know who ever played around with the interface were people who had implemented large versions of miniKanren, the true experts; they were the only ones who hacked around with it. So we completely misunderstood; we completely underestimated the difficulty people would have. And I think another problem is that, in the name of simplicity, in the name of short code, we made this tremendous mistake where we tied everything to the implementation, which is beautiful. I mean, every single character was thought over and bled over. Dan spent more than a year trying to remove one line of code, and then Jason convinced him to put it back. So, seriously, the implementation has been gone over and over with a fine-toothed comb, which is great. But the problem is, it optimized for succinctness, and for people being able to understand it in some sense, but it did not optimize for separating the declarative meaning from the implementation and the search and the heuristics.
Okay, so the thing I think we've realized is that if you actually want to be clever about the search, you probably need a first-order, data-structural representation of the search tree. You don't want procedures that are opaque objects, where all you can do is call the procedure and get the next part of the stream. You want some sort of reified data structure: you can inspect the tree, you can manipulate the tree, you can write debuggers, optimizers, program transformations, all sorts of things like that. You also probably need a compiler at some point; there's only so much you can do with an embedding. But even if you're going to stick with an embedding, you probably need a first-order representation. You probably just need to break things out as much as possible, in the way that Kowalski advocated in the '70s: Kowalski said that algorithm = logic + control. Okay, so the problem is we've smashed the logic and the control together completely. And even though the implementation is 50 lines of code, and in some sense is very easy for people to port, it then takes them maybe ten years to realize, oh, that was a bad idea; now you have to unlearn everything. It's fine to learn that way, but it really inhibits the optimizations you can make. So would it be fair to say that they learned miniKanren, but did not learn the concepts by implementing it? I think they learned a lot of the concepts. If you go through either The Reasoned Schemer, especially the second edition, or the microKanren paper, which is a tutorial reconstruction, you learn about why the interleaving search is important; you learn about different things. In the book, we walk through unification. So in a lot of ways, I think you do learn the concepts.
However, what you're really learning is a specific implementation of the concepts, as opposed to a more abstract presentation that clearly separates them. You learned an implementation that chose to mix the logical specification with the operational semantics; that's what you learned. You learned an operational semantics, but there are many others you could implement. And so I think the next stage is to take a step back from that and say, actually, we want to abstract over it. Even if we want to keep exactly the same declarative semantics miniKanren currently has, or the variants of miniKanren, because we have nominal logic and a whole bunch of other variants, maybe we want to keep those the same, but abstract over how they're implemented, how the search is implemented. Different relations should be able to have different search. You should be able to inspect, under the hood, the groundness of terms, in a way where, as Kent Dybvig says, you can do an optimization that is cheating without being caught. You still want it to feel declarative, but that doesn't mean you should throw away every opportunity to prune the search, for example, or to dynamically reorder conjunctions, which is what happens in Barliman: the conjuncts are dynamically reordered. That's one way Greg was able to speed things up by a billion times in some cases. That program I showed you, by the way, that took 10 or 11 seconds, would have taken at least a century to run with the original implementation. And we want to be able to have specialized solvers and all that, and either programmer annotations or some smarts underneath the hood to figure it out. So anyway, that's what I think is the next phase of the implementation: separating those more.
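The separation being argued for, a search space reified as plain data with the strategy chosen independently, can be sketched like this. The node encoding and the strategy names are hypothetical, for illustration only:

```python
# A first-order representation of a search space: nodes are plain data,
# ("answer", value) or ("choice", [thunks yielding child nodes]), so
# different strategies can walk the SAME tree without re-encoding it.

from collections import deque

def tree():
    # a tiny reified search space with answers at different depths
    return ("choice", [
        lambda: ("choice", [lambda: ("answer", "a"),
                            lambda: ("answer", "b")]),
        lambda: ("answer", "c"),
    ])

def dfs(node):                        # depth-first walk of the tree
    tag, payload = node
    if tag == "answer":
        yield payload
    else:
        for thunk in payload:
            yield from dfs(thunk())

def bfs(node):                        # breadth-first walk of the tree
    frontier = deque([node])
    while frontier:
        tag, payload = frontier.popleft()
        if tag == "answer":
            yield payload
        else:
            frontier.extend(t() for t in payload)

# Same declarative tree, two interchangeable searches:
assert list(dfs(tree())) == ["a", "b", "c"]
assert list(bfs(tree())) == ["c", "a", "b"]
```

Because the tree is data rather than opaque closures, a debugger could print it, an optimizer could rewrite it, and each relation could be paired with whichever traversal suits it, which is the "logic + control" separation in miniature.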
And there are Prolog implementations, like Ciao Prolog, that I think do a decent job of this. But miniKanren is special, I think, in its emphasis on relationality and some of the examples we can do. So I guess here's one way to look at it. There are a lot of people who could read our implementation, and maybe they could play around with it, maybe they could kind of understand it, maybe they could port it to another Lisp. With the second edition of the book, and with microKanren, there are a whole bunch of people who were able to port microKanren to a language of their choice; they were able to understand all of it, and they were able to do little hacks. And that was great. I think it raised the level of understanding; it raised the number of people who were able to play around with things and experiment. But what we still haven't seen is large numbers of those people, the people who've implemented these systems, playing around with the optimizations, with different search strategies, those sorts of things. So each time, I think, we figure out a way to raise people's understanding a little bit more. But now I think we've reached a plateau for a while, where unless you're a Greg Rosenblatt or a Michael Ballantyne or an Oleg or whatever, the sorts of hacks that are necessary to get good performance, or to implement alternative evaluation mechanisms, are just too hard right now. And so I think that's the next challenge: to do this in such a way that large numbers of people can understand it and will be able to hack on it. And it's important to have an influx of new people and an influx of new ideas, since we're not supported by, you know, we're not Verse, we're not Fortnite-powered.
You know, so we have to get people who are interested. We are weirdo-powered, right? We need to get some weirdo nerds who are like, oh yeah, this is cool. But that's what I think is the next thing we need to figure out: how to separate the declarative semantics from the operational semantics, and then have a way for people to explore these heuristics and different search strategies, in such a way that we get unstuck, so that you don't have to be a Greg to create a Barliman. Creating a Barliman right now is seen sort of the way miniKanren was seen when we still had the macros: you have to be some genius to create a Barliman. But it should get to the point where everyone who's interested creates a Barliman; it should be a standard exercise. It's like with MiniSat: you can create a SAT solver in a small number of lines of code that has a few of the standard techniques that are known to work well. There's no reason you shouldn't be able to create something like a Barliman. If you're able to implement a microKanren, the difference between that and implementing a Barliman should actually be small. Right now it's not. So, anyway, that's what I'm thinking. I don't have anything concrete to show you, but part of this is that I was using this talk as a forcing function, to force me to start thinking hard about what it is we actually need. It wasn't until this rant that I thought, actually, that sounds reasonable; maybe that's a good thing to do. Maybe I need to talk to Dan about doing a new book, a third edition. So, anyway, that's kind of what sucks about miniKanren, and hopefully we can do a version that sucks less. Questions?
So, what you just talked about sounds, at a very high level, like the classic problem of building one implementation that you're going to throw away. In the sense that you built an implementation, it does stuff, and now it sounds like you're thinking, well, we should throw that one away and build a new one, because we learned a whole bunch of stuff from it. Is that a fair synthesis, or did I miss something? Well, I mean, I guess we've been doing this for about 20 years now, and we've thrown away an unbelievable number of these. And that's a different problem, which also is a problem, by the way. The problem isn't so much that we're throwing away a bunch of things, for good or bad. It's more that we have a whole bunch of research prototypes, exploring things like higher-order unification, or pattern matching, or nominal unification, or constraint logic programming over finite domains, or a whole bunch of other things. And each one is its own artifact that someone implemented, or a couple of people implemented, at one period of time, using one version of the implementation technology, in one programming language, and they don't compose. So we have like 30 miniKanrens, all of which are interesting, but we don't have what I call Big Kanren. So that's another direction we might go in: Big Kanren, which is, okay, let's figure out how to put all of these features into one system. Now, there's some tension here, which is similar to the tension in the Scheme community about standardization. I remember going through R7RS; I was a grad student when R6RS started. R5RS was the Scheme standard, and it was pretty small. Even then some people complained it was too big, but with R6RS, the need for a new standard was felt, because no one could run code across different Scheme implementations.
The process, let's say, wasn't very smooth. In fact, it was so not smooth that R7RS was a reaction to it, where they said, okay, the problem is that there are two parts of Scheme. There's the really tiny, jewel-like core that everyone loves, which is great for ideas and hacking; that's like microKanren, the tiny little thing that anyone can implement and hack on. And then there's the part where people actually want to use it for real, the way they might use Clojure or Common Lisp or something. And for that, the tiny 50-line implementation maybe doesn't work so well. Maybe you want some libraries, something like that. In Scheme, that tension was felt so severely that for R7RS they actually broke the standard into two versions, a small standard and a large standard. The small standard was finished; the large standard is still going, and I don't know if it'll ever finish. I mean, seriously, it's been going on for a long time. And I think there's that same tension in miniKanren, because it started as a teaching language, and a language for hacking up implementations very easily so you can explore, just like Scheme. But there are also implementations like core.logic in Clojure, where a company that was bought out for probably on the order of a billion dollars used that technology. So what should it be? Should it be something very pragmatic that has all sorts of features and optimizations, or should it be this tiny thing that you could teach any undergraduate, that you can hack up a version of in an afternoon? I don't know; I guess we're trying to do a little of both.
So, you know, there's this Big Kanren direction, which is kind of the opposite of what I was just talking about, because that's not something you're going to have an undergrad or a hobbyist do. Big Kanren is like, oh yeah, take 30 different implementations and shove them together; that's not an afternoon project. But then there's also this effort to get more people into the fold, and from that standpoint I think more in terms of books and papers. So, we have two editions of the MIT Press book. They were 13 years apart, and each edition had an implementation that had some success, but the implementations were quite different. And I can imagine, say, a third edition that would be different still, that would try to separate these concerns more. But, yeah, part of it also is what I'll call the Java phenomenon. Okay, so I'm not a fan of Java, but Java did do something that was really important, which is that it got people to accept garbage collection. That was critical. And it also got people to accept virtual machines; so those are two things. Those are all old ideas, but they weren't really accepted as things you could use in industry for serious programming. I mean, I talked to some C++ programmers about garbage collection back then, and the reaction was dismissive. And now it's like, oh well, if you're not doing garbage collection, you should be doing something more sophisticated, not less sophisticated; you should be doing Rust and borrow checking or whatever; it shouldn't just be a free-for-all. So that was a big change, and I think it just takes time for people to accept and learn new ideas and techniques, and industry has been very slow to adapt. I think that's just true in general. These are hard things; logic programming has, I think, a well-deserved reputation for being very hard to understand.
So over time we've tried to get more and more people to understand some of the core ideas more deeply, but every time there's still a limit to what we can explain. First of all, we're learning how to explain things to people over time, and we're also still working out what the ideas even are. And, you know, this group: when I started grad school, a group like this didn't exist. There weren't functional programming user groups when I started grad school. And when I was working in industry, I had to explain to people what functional programming was. They're like, what is that? I think I've heard of that; is it when you use functions? Right. Now it's like, you know, Java's got lambda. So the landscape has also changed. Some of these ideas, the abstraction has increased, and now things like generators are accepted. Now we have Verse, which is some flavor of, I don't know what to call it. They're saying it's a functional logic language; it's not what I'd consider a functional logic language, but maybe they'll get to define the term. It's very similar to Icon in a lot of ways, and it has some very interesting ideas in it. So in a world where there are a bunch of developers programming Fortnite in Verse, some of these ideas just might be easier to get across. I think over time we've gotten more abstract. So I don't know if it's a second-system effect; I think it's a little different than that. It's more like, every once in a while we feel like we're plateauing, either in our explanations to people, or in people coming into the community, or in our own ability to make progress. And I feel like right now we're at a plateau: we were making some progress, and now we're kind of stuck, and we're trying to figure out why we're stuck and where we're stuck.
So that's, I think, the heart of the issue: we had this one implementation strategy that worked really well for certain classes of problems, but it needs a lot of work to make it work better for other things, and the implementation that was wildly successful kind of locked us in. Do you ever see a case — take the case of functional programming concepts sort of merging into the object-oriented languages, like lambdas going into Java — do you ever see a case where it just sort of becomes part of the standard library in some of the more mainstream languages, instead of being a separate thing? Are you talking about functional programming in mainstream languages in particular, or more abstract things, or logic programming, or what? Yeah, relational. Well, you know, logic programming has been around for a fairly long time — certainly since the early 70s. It had this kind of weird moment in the 1980s with the Japanese Fifth Generation project, where the Japanese government and an industry consortium sort of decided to standardize on Prolog for a while. And they did some really interesting work, and they had languages other than Prolog, but a lot of the work was in Japanese, and a lot of it didn't escape Japan. Then the government funding ran out and the project kind of went away. And there was this kind of AI winter, sort of similar to what happened to the Lisp machines in the 1980s in the US. The same thing kind of happened with Prolog and logic programming in Japan: there was some hardware, and a lot of money, and a lot of high expectations, and then people said, well, that didn't really meet those expectations. So there was a time in Japan in the 1980s when, you know, anyone in college learning programming was learning Prolog, and it was influential, and then it kind of went away.
And there hasn't been a follow-up moment. With functional programming, I feel like functional programming ideas have kind of won, in that — you know, it's not like object-oriented programming is gone, but people, and certainly language designers, accept that things like immutability can be useful, and that functions as first-class values can be useful, so you're seeing these languages change. What I have seen is that Datalog seems to have been getting a lot of traction. Maybe answer set programming will start getting traction as well. You know, part of the problem with logic programming is that you're always skating on the edge of what's expressible in your subset of all of logic. It's easy to wander into something like higher-order unification, where every problem becomes undecidable, because to unify you have to decide equivalence of lambda terms — that's an undecidable problem. So in logic programming a lot of the trick is to figure out what restricted part of the language you can use. Like Olin Shivers says: always use the least expressive language you can get away with, because you have a lot of leverage over a restrictive language. That's why you use regular expressions instead of writing an arbitrary Java program when you can get away with it. And Datalog seems to have made a big impact recently, so at least a subset of logic programming has. And of course SQL can be seen as a type of logic programming; obviously that's been very successful. I think things like answer set programming will probably become more and more popular, because SAT and SMT solvers seem to be becoming important tools more widely. As far as the more Prolog-y, miniKanren-style relational stuff — I think over time that'll happen.
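To make the Datalog point concrete: the appeal of such a deliberately restricted language is that queries like transitive closure, which are awkward in plain SQL, are two rules in Datalog, and evaluation is a guaranteed-to-terminate fixpoint. A rough sketch of naive bottom-up evaluation in Python (the relation names and data here are made up for illustration):

```python
# Naive bottom-up evaluation of a two-rule Datalog program:
#
#   path(X, Y) :- edge(X, Y).
#   path(X, Z) :- edge(X, Y), path(Y, Z).
#
# Facts are just tuples; evaluation repeats the rules until no new
# facts appear. Termination is guaranteed because the fact universe
# is finite -- the leverage you get from a restricted language.

edge = {("a", "b"), ("b", "c"), ("c", "d")}

def transitive_closure(edges):
    path = set(edges)                 # rule 1: every edge is a path
    while True:
        new = {(x, z)                 # rule 2: join edge with path
               for (x, y1) in edges
               for (y2, z) in path
               if y1 == y2} - path
        if not new:                   # fixpoint reached
            return path
        path |= new

paths = transitive_closure(edge)
```

Here `paths` ends up with all six reachable pairs, including the derived `("a", "d")`. Real Datalog engines use smarter strategies (semi-naive evaluation, indexing), but the semantics is exactly this fixpoint.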
You see things like generators in languages now. And there's Verse, which is claiming to be a functional logic language. I think it is a functional logic language; it's just very, very restrictive right now. But maybe the same thing will happen to it that happened to Java, where the first versions of Java actually weren't very abstract, and over time the language became more abstract. So, I guess some of my issue with looking at Verse is that it seems like there are a bunch of places where they could be more abstract and embrace the logic programming part of the language more. Right now they're not doing that, maybe for performance reasons. But I don't think there's anything in the language design that would keep them, in the future, from relaxing some of those restrictions and becoming more of a logic programming language. And if Verse ends up really catching on, and over time it becomes more relational, then I wouldn't be surprised if people copy that. But right now I don't see a whole bunch of these features showing up in most languages. That's why I'm interested in Verse: it's the first language I've seen supported by a major company that claims to have logic programming in its paradigm since, I think, the 1980s. That's Verse? Verse, yeah. That's from Epic Games, the people who do Fortnite and all that. And it has to interoperate with C++, so there are some pretty strict demands on what they can do. This is, you know, part of the metaverse push, and they want to be able to have strong reasoning and strong guarantees about what sorts of things are allowable, which you probably couldn't get with C++ directly. And so they have this interesting rewrite-based semantics — the formal semantics are based on rewriting.
They had this kind of unusual confluence proof — a Church-Rosser-style confluence proof where they have to use a modified notion of confluence — and that was the big innovation on the formal side. At POPL in Boston last year they presented the semantics built on this rewriting system. So that's kind of cool: they have this combination of a functional logic language with a rewrite semantics that they can prove confluent, for some definition of confluent, that's tied into a major ecosystem and is sponsored by a multi-billion-dollar company — and the head of the company was the one who wanted the language. This is very different; you don't see this very often. So maybe this will be a language that has a big influence over time; I think it's too early to tell. But if it becomes wildly successful — or even if it doesn't — it's high-enough profile that language geeks might study the lessons very closely, and so you might start seeing differences in other languages. I will say that I think the language design is very conservative in some ways compared to, say, Prolog or miniKanren, at least their pure subsets. And I think the reason is that they want to make sure, first of all, that they can deal with the effects of mutation and C++, and also performance, so I think they've been conservative. But I wouldn't be surprised if over time they relax things or add alternative operators. Right now they have a list-based semantics, for example, for a lot of their operations; I think they could go to a set-based semantics. And in some cases they do left-to-right evaluation of everything; I think they could go to nondeterministic evaluation in some cases. So I don't know.
I mean, we'll see what they end up wanting, but it is, you know, the first time I've seen a functional logic language touted with any commercial prospects — even if you disagree about what functional logic programming is. It also reminds me a little of the reaction I had when I first saw Clojure, which was: oh great, a new Lisp, I'm so excited! Let's see about the tail recursion — oh, no tail recursion. Let's see about the hygienic macros — no hygienic macros. And I'm like, put it back on the shelf. Right? But that wasn't a fair reaction, because what Rich was trying to do was different; he was trying to live in this ecosystem. And, you know, my reaction to Verse was excitement followed by extreme disappointment, but I think it's sort of like my reaction to Clojure: okay, they're trying to solve a certain problem, in a certain ecosystem, under certain constraints that they can't change, and from that standpoint maybe it makes perfect sense. It also may have the longer-term effect that people start looking at functional logic programming and constraint logic programming more seriously again. So over time it might be kind of a Java moment: Java got a couple of really important ideas into people's minds, and now — assuming Verse succeeds — you don't have to make the argument against "functional logic programming will never be efficient." It's like, well, Fortnite's using it, so shut up. Right? That's the thing Java gave you: garbage collection is too slow? Well, you're using that website, and it's written in Java. You know, I have to say, I'm quite upset that you've made it so I can't yell at my children to stop playing Fortnite. Well, you can tell them to stop playing it — but now they need to start programming it.
So, when you're saying that, like, metaverse-type thing — what does that exactly mean? Does that mean that we've got to have communication, like, how much communication is happening between both? Well, okay, so I should be careful here, because I don't understand all the details. You should watch — there was a talk by both Simon Peyton Jones and, from Epic Games — I'm totally blanking — Tim Sweeney. Tim Sweeney and Simon Peyton Jones gave a talk that I saw at POPL last January in Boston, where they talked about this in some detail. And my understanding is that Verse is already being used with Fortnite, in conjunction with C++ — that's my understanding, but don't quote me on this, because I haven't used it in anger. And then the other part was that, longer term, the vision is metaverse-based — you know, like Neal Stephenson, Snow Crash, the metaverse. Everyone wants to have their metaverse now: Apple wants something kind of like a metaverse, and Meta does, and obviously everyone else. So it's something like that, where obviously there's going to be some big economy, and it's probably going to have virtual reality and games and all that. And what Tim Sweeney wants is a system where you can have a million concurrent people, and they're communicating and trading, and there are economies, and people are trying to rip each other off and rip off the system and do all the things people do in big systems with economies.
And the system would be resistant to that, because it wouldn't have the same set of bugs you would have if you wrote a big system like that in C++, where it's whack-a-mole. Instead you have this well-grounded semantics — the rewrite semantics and confluence proofs and all that — and you could do formal verification and all sorts of sophisticated things, so the underlying system would be rock solid and you could have guarantees. That's my understanding of the long-term vision. I don't know how that fits in today with what has been deployed, but I have heard that there are people right now programming in Verse with Fortnite; I just don't know the details. I've thought about diving into it. The other language that's interesting, I think, is the language Icon — I-C-O-N. When Verse came out, I expected functional logic; I expected it to look like the language Curry, if you're familiar with Curry, or maybe like Mercury. Those are two languages I think of as functional logic, and to me Verse looks very different. It looks closer in some ways to Icon. Icon has a very interesting model. So I think some of the Icon ideas are getting into people's minds. And as far as I can tell, the Verse people didn't really think of Icon when they were designing the language, and then people pointed out to them that actually a lot of these ideas look very similar to Icon. So I think Icon is a language full of interesting ideas. If you're a language geek, I recommend checking it out — take a look at Icon, take a look at Verse, and see if they don't look very similar to you. But yeah, I can't speak to exactly what the long-term Epic strategy is. I mean, Epic seems like they're in lawsuits every two seconds, and then there's some big investment. Yeah, who invested in them? Someone wants to invest, like, a gazillion dollars?
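For readers unfamiliar with Icon's model: every expression in Icon is a generator that can suspend, and a failing test transparently resumes the generator for another candidate (goal-directed evaluation). That protocol maps surprisingly well onto ordinary Python generators; this is a rough, hypothetical sketch of the idea, not Icon or Verse syntax.

```python
# A rough sketch of Icon-style goal-directed evaluation using Python
# generators. In Icon, `1 to 10 & x > 7` generates candidates, and the
# failing comparison silently resumes the generator; here we spell the
# protocol out by hand.

def to(lo, hi):
    # Icon's `lo to hi`: a generator of successive integer candidates.
    yield from range(lo, hi + 1)

def greater_than(candidates, bound):
    # Icon's `expr > bound`: "failure" on a candidate just means
    # resuming the generator; each passing value is a success.
    for v in candidates:
        if v > bound:
            yield v

# Like Icon's `every` construct: produce every success.
successes = list(greater_than(to(1, 10), 7))

# Plain evaluation takes just the first success, Icon's default.
first = next(greater_than(to(1, 10), 7))
```

The resemblance to Verse that comes up in this discussion is that, in both, an expression can denote multiple values and failure drives the search for the next one, rather than being an error.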
Like someone's going to invest a billion or something? I don't know. They're in the news a lot, but I don't actually care about that; I care about the PL stuff. I just want Fortnite-inspired logic programming popularity. So every time you get, like, I don't know, DLC or something, maybe that's one step toward having miniKanren take off. So — I'm looking at the time — I want to thank you, and then I want to stop the recording, and we can keep talking for as long as people want to talk. But this was excellent, and kind of mind-blowing for me and some other people here. Yes, this was really mind-blowing, so thank you so much. This was a ton of fun. Thank you. It was fun for me too. Thank you very much. Okay — if you stop the recording, we can ask our other questions.