Hello. So, as Naresh mentioned, I work at this organization called MIRI. We do research into AI safety and into what that would even mean. A lot of our focus is on trying to figure out ways that, as we build smarter and smarter systems, we somehow wind up in the upper end of the distribution of ways this can go. It's a pretty broad mandate. But in general, you don't really want the moment you start thinking about safety for these sorts of things to be after you've created the problem. You don't want to be in the situation of Mickey enchanting the broom to go fetch water from the well, or whatever it is in the Sorcerer's Apprentice bit, and then having to figure out, after you've created the problem, how to put the genie back in the bottle. In pursuit of that, we publish a bunch of academic papers on ways to enable agents to cooperate once they cross a certain complexity threshold, or on what it even means to talk about things like this: if I give you a very, very long number and ask you, as a Bayesian, what's the probability that it's prime, you should be able to answer instantly, with confidence zero or one; there's no intermediate probability, it's true or it's false. But as an agent that actually has to reason and takes time to work through its thoughts, there's a whole process there that gets swept under the rug by naive Bayesian reasoning. So Nate Soares, who's our Executive Director, is up there; that actually is his handwriting. It really offends me how clean his writing is. Anyways, to this end, we run workshops where we invite folks out to the Bay Area, try to introduce them to topics in the AI safety space, and get them interested in working on this. Because at this point in time you have something like 300,000 people meaningfully working on advancing capabilities in AI, and I would say something like 50 people meaningfully working on safety, and no, I didn't misplace a decimal point. So maybe that seems like a disproportionate response.

A lot of my stuff these days is written sitting on Twitch. If you look on twitch.tv for my name, you'll find many, many eight-hour-long programming sessions of me just sitting there live-coding a lot of the kind of stuff we're going to talk about today, with random horrible paper sketches and me sitting there trying to explain the complexity, in this case, I think, of Dancing Links. But I would personally be much happier with the idea that AI safety is a tractable problem if we flashed all the way back to the 70s and 80s, went down a Marvin Minsky-style research program of pushing around a bunch of symbols in some kind of Lisp-like thing, and that had worked. If that had been the thing that got us all of our crazy agents inspecting cat pictures and our ability to fit 3D models to stuff, I would have a much greater level of confidence that we could actually reason about the behavior of these systems. So that's sort of where I'm at right now: trying to figure out ways I could make functional programming, logic programming, formal methods, etc. scale enough to be part of the solution. My personal research agenda is a sort of contingent research agenda: hey, look, you'll be able to figure out how to specify these goals in ways that don't go wrong. But there's this notion, which some of you who've grabbed me in the hallway have heard me talk about, which is something like this.
There's this idea called Goodhart's Law. Goodhart's Law is the idea that you get what you measure: if you have a metric that turns out to be correlated with what you're looking for, but isn't what you're looking for, and then you start using that metric to judge people, like measuring programmers by lines of code because it happened to be correlated with productivity for a while, then once you start measuring on that metric you cease to get productivity out of lines of code, right? You get Java. And if you look at AI and agents in this space, the things that we can measure as a utility function are proxies for what we actually want. This is the thing I know how to write down that's nice and concise and cleanly mathematical, right? But it doesn't take into account all the messy externalities and other stuff we should also consider. So the fact that Goodhart's Law kicks in basically says that almost any AI, once it gets past a certain complexity threshold and gets very good at zooming in on exactly what you've asked for, is going to start optimizing excessively towards this target, which is what you can ask for, which is a proxy for what you want, and you hope that the dot product with the direction you actually want it to go is sufficiently positive.

I'm not the only person with a language focus in this space. Dimitrios is also running around; I think he took a job recently at DeepMind. He published a paper fairly recently with a guy named Alexey Radul, whose name will come up later in this talk, and a couple of other folks, including one of the guys who started PyTorch, on a programming language project called Dex. Their focus is on big array-based calculations in a Haskell-like language, so they add these dimension types so they can talk about how to run big batch processes over tensor-like data sets and produce LLVM-like things.
So they care about scaling in that direction. I've given talks in the past about how to do SIMD evaluation of Haskell-style code, as another way for me to try to make functional programming scale, because we've scaled down to the core level, but we haven't really scaled down to use the SIMD units in your CPU, and we haven't scaled out to your GPUs and your TPUs very effectively. So a lot of my research interest is: if there's a compute bound that keeps functional programming from being relevant in this space, how do I crush it? If there's a complexity bound in terms of our ecosystem, there are a bunch of decisions that were made as part of the design of Haskell back in the 80s and 90s that may have put an artificial complexity cap on the kinds of thoughts we can think in this language, or on the way our package management ecosystem builds on top of itself. So there are a bunch of directions implied when I say I want functional programming to scale.

OK, so now we can actually get to what this Guanxi thing is, in this current little sub-goal. One of the other components I have is trying to make proof effort scale. I would like to be able to have more than one or two people working on large bodies of formal-methods-like code. In this space there is a system whose ideas I'm going to try to repurpose a bit, even though it's not really focused on proof: there's this logic programming framework in Scheme called miniKanren. Kanren is... let's see how I can describe Kanren. How many people here know what Prolog is? OK, good. Kanren is basically a Prolog written as a little embedded domain-specific language, a library, in Scheme. OK, I'm not interested in writing Scheme per se, but Dan Friedman, who is the gentleman in the hat on the left there, has written a bunch of these books, like The Little Schemer, The Reasoned Schemer, and The Little Typer. They're rather popular; I think The Little Schemer, or The Little LISPer, is the computer science textbook that's been in longest continuous publication, or something like that, and some ridiculous number of students have read the thing. And Will Byrd, who's the gentleman sitting next to him, was his student and co-author on a lot of the logic programming work. So they have been building this system called miniKanren, or Kanren, for a while. But their goals are not my goals in a lot of ways. You can look at Dan's work as trying to make sure that everything he builds can be taught at a very introductory level to anybody who can parse through the source code, right? It's very much designed so that almost everything he builds can be taught at roughly the fifteen-year-old level, and it's great, it's incredibly accessible. Those of you who know my work might realize that this is not my particular aesthetic. I, on the other hand, am willing to throw everything I know about math and computer science at trying to make my system scale. So what I'm interested in doing is taking something like a Kanren-style foundation and throwing everything I know about SMT solvers and SAT solving and all of these other domain-specific optimizations, and the stuff about propagators that I'll talk about tomorrow and a bit today, at trying to make logic programming scale. OK, so, as a bit of an introduction to the terminology here.
Kanren, whose pronunciation I'm butchering (and I will butcher the pronunciation of all the foreign words throughout this entire talk, I apologize), is the Japanese word for "relation" or "linking". It was named that, I believe, because Oleg Kiselyov was learning Japanese at the time; he's now, I think, teaching in Japan, so it worked out for him. Last year I spent a bunch of time learning Mandarin, or starting to, and I was looking around for a name for a logic programming framework of my own. The word in Mandarin for this sort of relationship-like thing, with a slightly different focus, is guanxi. Guanxi has different connotations than the fairly plain word kanren has in Japanese: in Chinese it's the network of relationships you have to pay attention to to get anything done in business; you pull on guanxi to ask for a favor, and give face, and do all this kind of thing. So as a dig at Kanren, the name Guanxi says: hey, look, they're not paying attention to all these other externalities. As a dig at myself, guanxi is also viewed, through Western eyes, as the source of what Westerners would consider corruption when trying to deal with China, why you can't get a foot in the door trying to do business there: you just don't have any guanxi to pull on, right? And so this overcomplex core will probably corrupt my answers. So that's my dig at myself.

So why do I care about Kanren? The system that I'm looking to explore is this thing called Barliman, and Barliman exploits the following idea. What Will was able to do was build a Scheme interpreter; he wrote an entire Scheme interpreter as a logic program in Kanren. And with a Scheme interpreter as a logic program, it's a relation. One of the things you can do with Prolog is: if I want the result of appending xs to ys, I can run that program backwards, asking for all the lists I could have possibly appended to get this output; you can use functions as relations in something like Prolog. So here, having a Scheme interpreter as a relation means I can say: hey, look, I have a program, and it's just a hole, I don't know what it's going to be, but here are some unit tests I would like it to pass. I would like to do test-driven development in a way that matters. And in about nine seconds it was able to say, from these three examples, let's synthesize the function I'm thinking of, which is magic to me. And they're doing this in Scheme, where they don't have any freaking types. I think they run into some problems when they go to do, say, reverse; let's be honest, there are some problems here, right? But what they consider a problem is that it takes more than a couple of minutes. And if you give it helper functions (this might be a little bit difficult to see, I apologize), there's a definition of reduce-right up there, which is like a foldr. So if I give you foldr and I ask you to define list concat, and I expect list concat to be a lambda that takes two arguments, and then ask you to fill in the hole, it will use the helper function, because it prefers shorter programs. So if the programs I would like to synthesize, the little problems and goals I would like to address, are short enough, then something like this approach to program synthesis can rather drastically accelerate the developer workflow. And so this is where my interest comes in.
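To make the "run append backwards" idea concrete, here's a minimal sketch in Haskell using the logict package rather than miniKanren itself; `splits` is a name I'm making up for the example, not anything from Barliman or Guanxi. Given a known output, it enumerates every pair of lists that could have been appended to produce it.

```haskell
import Control.Monad.Logic (Logic, observeAll)
import Control.Monad (msum)

-- "append run backwards": for a known output zs, nondeterministically choose
-- every split point, yielding all pairs (xs, ys) with xs ++ ys == zs.
splits :: [a] -> Logic ([a], [a])
splits zs = msum [ pure (splitAt n zs) | n <- [0 .. length zs] ]

main :: IO ()
main = mapM_ print (observeAll (splits "abc"))
-- ("","abc"), ("a","bc"), ("ab","c"), ("abc","")
```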
Now, I'm not interested in synthesizing Scheme programs; I'm interested in synthesizing Haskell programs, or dependently typed programs, or things with crazily complicated type systems, and it turns out that the more interesting the type system, the more programs I can throw away earlier in this process. As a bit of terminology, there's this thing called Barliman. Barliman is: let's take that Scheme interpreter and put a little GUI on it. It's named after, I think, a character called Barliman Butterbur in Lord of the Rings; he thinks less than he talks, and slower, but he can see through a brick wall in time. So he's dumb as a post, but he gets the job done eventually. That's at least where the name Barliman comes from, if you want to look it up, and Will Byrd has some talks about it you can find on the internet.

But I'm interested in pushing this idea further. Right after Will presented the first example of a Scheme interpreter written as a relation in Kanren, or miniKanren, Gershom Bazerman, the gentleman up there on the upper left, had just given a talk on doing dependent type checking in Scheme; he wrote a little dependent type checker and taught it at LispNYC. OK, so he had this dependent type checker written in Scheme, but we now have this ability to run Scheme programs that have holes in them. So what I would like to do is take this dependent type checker and put little holes in my problem: if I put holes in the types, can I use it for type inference? What if I put holes in the terms? And the answer turned out to be yes. It turned out to be rather glacially slow, but the fact that this worked out of the box, like ten minutes after the talk, really woke me up and made me think about this a lot. OK, so that's where Gershom comes into the story. I'm not interested in synthesizing Scheme programs, but if I start with a Hindley-Milner or Haskell-like type system, what I can do is let the types throw away any syntax tree that could not possibly complete to a well-typed term, or I can have partial types and partial terms and throw things away as I go. If you're searching through a space of 2^120 programs, it helps to be able to throw away 2^70 or 2^80 of them at every turn, so having the Hindley-Milner-style types helps a lot.

How many people here are familiar with the idea of Liquid Haskell? Has this come across? OK, that's a smattering of folks. Liquid Haskell is this idea where what they did was say: let's take Haskell code and put little side conditions on it, like what's the weakest precondition for this function, or what's the postcondition it'll satisfy. They write these extra predicates down, basically in Haskell, and feed them off to Z3, an SMT solver, to solve. So Nadia Polikarpova, who's working on a project called Synquid, takes these Liquid-Haskell-style types, these little side-condition pre- and post-conditions for functions, and uses them to help guide the process of searching. So you've got Hindley-Milner-style types, Haskell-style, you know, the foralls, the nice polymorphism, where type inference works and everybody's happy; and then you also have these more interesting predicates: I'm going to give you an integer that's greater than or equal to five, or I'm going to give you a list that is sorted as a result, or something like that. I can write down my conditions: this list will be a permutation of the input list. So Nadia's work is on using liquid types to help guide program synthesis, and she uses Z3 as sort of her only tactic.
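As a small illustration of the kind of side condition being described, here's a sketch of a Liquid-Haskell-style refinement on an ordinary Haskell function; the `{-@ ... @-}` annotation is Liquid Haskell's syntax, `len` is its built-in list measure, and the function name `append'` is just mine for the example. The file compiles as plain Haskell on its own; the refinement is what the SMT solver would be asked to check.

```haskell
-- The refinement says: the output's length is the sum of the inputs' lengths.
{-@ append' :: xs:[a] -> ys:[a] -> {v:[a] | len v == len xs + len ys} @-}
append' :: [a] -> [a] -> [a]
append' []       ys = ys
append' (x : xs) ys = x : append' xs ys

main :: IO ()
main = print (append' [1, 2] [3 :: Int])   -- [1,2,3]
```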
Finally, down here I have this guy Edwin Brady. He wrote this little language called Idris; there's a book on programming with Idris that's been making the rounds for a little while now. Idris is another dependently typed language. It's focused mostly on programming with dependent types rather than on trying to be a theorem prover; it can be a theorem prover, but it's mostly a programming language that happens to have dependent types. And he's been doing work lately on what he calls Idris 2, or Blodwen, the working name for this thing. While he's been working on Blodwen, he's been incorporating something called quantitative type theory, which is an idea from Conor McBride, where when I give you a variable I can tell you how many times you're allowed to use it in the program. So I can tell you 0, or 1: you're going to use this variable exactly once. So you can use linear types, if that term has crossed your ears, to help restrict the space of possible programs. The idea is that using linear types, where possible, rules out classes of failures: you can't accidentally forget to close a file, because you have to do something with the resource, you have to use it exactly once. So you thread it through: opening the file creates this chain of reasoning where I pass the file handle through all the code that I want to run, and then it runs through to the close, and a program that doesn't properly close its resources is just ill-typed. That's the kind of thing that quantitative type theory gives you.

So why do I care about this? Well, again, I'm searching through these huge program spaces. In something like Idris, usually you sit there and hit some wrist-breaking set of macros to try to synthesize things: please case-match on this term, please do this; you are sort of your own tactic engine in a dependent-type-checking setting. Say you wanted to compute the transpose of a matrix: you have a vector of N vectors of M A's, and you want to turn that into a vector of M vectors of N A's. If you just ask Idris or something like that to synthesize a program, what it'll probably do is figure out how to get one vector of the right length and then just repeat it, smeared over the whole thing, because that's the shortest program; synthesizing the next column in the transpose is actually needlessly complicated when you already have a row that has the right shape. But it turns out that if you put on a linear constraint, that you have to use every element in the matrix exactly once, the shortest program will probably be the transpose; there's a sketch of this right after this paragraph. So if I want to under-specify my constraints and still be able to very quickly get a program that does useful work, having these quantified types, having these refinement types, having Hindley-Milner-style types, lets me throw away larger spaces of programs earlier in the process.

So what I'm interested in doing is taking the Kanren idea and scaling it a lot. When I talk to folks in movie special effects, where I've worked, there's a notion that a render farm is something like 10 to 40,000 machines when you're working on a film, and we use those machines to try to minimize artists' downtime. Why is programmer downtime any less valuable?
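Here's that transpose point as a minimal sketch in plain Haskell (lists standing in for the length-indexed vectors, so the types here don't actually enforce the shapes): both functions below have the same type and produce a result of the right shape, but only the second is the transpose, and only the second uses every element of the input exactly once.

```haskell
-- Shape-correct but wrong: grab the first column and repeat it. This is the
-- kind of "shortest program" a naive synthesizer reaches for, and nothing in
-- the simple type rules it out.
smear :: [[a]] -> [[a]]
smear rows = replicate (length (head rows)) (map head rows)

-- The real transpose: uses each element of the input exactly once, which is
-- exactly what a linearity constraint would demand.
transpose' :: [[a]] -> [[a]]
transpose' rows
  | null rows || any null rows = []
  | otherwise                  = map head rows : transpose' (map tail rows)

main :: IO ()
main = do
  print (smear      [[1, 2], [3, 4 :: Int]])  -- [[1,3],[1,3]]
  print (transpose' [[1, 2], [3, 4 :: Int]])  -- [[1,3],[2,4]]
```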
Especially in a space like AI safety, every extra body that we bring in to work on this thing is actually a risk, because capabilities work is so close to safety research, and lots of capabilities have come out of folks who were nominally working on safety. Oops. So trying to maximize the productivity per developer in my particular research area is rather near and dear to my heart, as is trying to figure out the appropriate amount of computing resources to give someone in this space so they can get ahead of the curve and be there before the rest of the industry finishes catching up, right? What is the appropriate burn rate to actually be efficient in this space? I don't have answers to all of these things. OK, so now I'm going to poke at that. That was sort of the non-technical intro, for the most part. So, Naresh, now I'm actually going to go off the rails, I apologize.

OK, so Kanren uses this funny search called LogicT, which was designed by Oleg. Oleg Kiselyov is really rather well known in the Haskell community and in the Scheme community; he's ridiculously smart. The observation here is that when you're searching through the space of programs, Prolog is going to go depth-first through whatever set of goals you give it, and the problem with depth-first is that if it goes in the wrong direction, and there's just something like x = f(x), it just goes f of f of f, it keeps giving you f's, it's never going to stop. So the problem with a Prolog-style search for program synthesis is that if you go in the wrong direction, you never stop and consider the other directions. A breadth-first strategy is one way to fix this, but breadth-first tends to take way too much memory. LogicT is this funny little search where the first search item gets half of your attention, and then the second search item gets half of the remaining attention, and so on, so you get this geometric series worth of fall-off in attention. Therefore, while you're processing things further along, if something further down the tail is sufficiently productive, it'll pop up with an answer, and the front item is only slowed down by a constant factor. This notion means you have the same asymptotic performance as depth-first search, but where the depth-first search was going to diverge, instead of diverging you only slow down by a factor of two for each place where you would have diverged. So this can get very, very glacially slow very fast, but LogicT is not depth-first, so it won't get lost down one blind alley forever.

But it's big and expensive: you have to keep all these environments around. You have a bunch of different searches that are at very different portions of their search space, they have maps from variables to values, and there's no good way to throw away old values, so you kind of leak memory over time with the Kanren approach; the longer you run a logic program, the slower it gets, almost by force. And this is not really compatible with incremental SMT solvers, where I'd want to use an off-the-shelf solver. A SAT solver basically says: hey, give me a big Boolean formula, and I'll give you back an assignment, like x equals true and y equals false, that in the end makes the formula true. That's what SAT solving is. SMT solving lets you have more interesting properties than just Booleans. If that's your level of understanding of SMT solving, you'll get something out of this. So I want to take all that stuff and apply it here, but I can't just use an off-the-shelf external SMT solver.
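Backing up to the LogicT point for a second, here's a minimal sketch of fair versus unfair disjunction using the logict package (the names `odds` and `evens` are just for the example): with plain `<|>`, the first infinite branch hogs all the attention; with `interleave`, attention is split, so answers from the second branch still surface.

```haskell
import Control.Applicative ((<|>))
import Control.Monad.Logic (Logic, observeMany, interleave)

-- two infinite streams of answers, as nondeterministic computations
odds, evens :: Logic Integer
odds  = pure 1 <|> fmap (+ 2) odds
evens = pure 2 <|> fmap (+ 2) evens

main :: IO ()
main = do
  -- unfair disjunction: the first (infinite) branch never yields its turn
  print (observeMany 8 (odds <|> evens))          -- [1,3,5,7,9,11,13,15]
  -- fair disjunction: attention is split, so both branches contribute
  print (observeMany 8 (odds `interleave` evens)) -- [1,2,3,4,5,6,7,8]
```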
What's happening is that most of those let you add extra assumptions, extra constraints, on a stack-like basis, and roll them back on the same stack-like basis. But the LogicT-style search works half of its time over here, then it gets bored and goes off and works on something over there for a little while, then it comes back and works on something else, and the amount of attention it gives each of these things keeps getting smaller and smaller. So all the time spent setting up and tearing down that stack eats all of the benefit of having an external incremental SMT solver. To avoid their maps, there's this old idea I thought was super clever and that I could use: I can use real references and support backtracking rather than having to pass around environments. And then I found this rather old paper from Koen Claessen and Peter Ljunglöf which had everything I was trying to do. So I turn around and use actual machine references, even though I have non-determinism, and I can get away with that by not doing the ADHD thing of bouncing all around my problem domain; I have to do something that feels more like a depth-first search.

The other problem is that it doesn't learn from its mistakes. It goes down some blind alley, it doesn't find a solution there, and it doesn't generalize that into a life lesson that keeps it out of similar problems in the future. In SMT-solving or SAT-solving terms, it does a DPLL-style search rather than CDCL. CDCL is conflict-driven clause learning, which is the idea that when you go in some direction and don't find an answer, you extract some minimal set of assignments that caused the failure: say, "x is true and y is false" would definitely blow up. Then the moment I ever see that x is true, I will learn that y must be true, and if I ever learn that y is false, I'll learn that x must be false. Instead of ever stepping back into that situation, even if I was going to assign my variables in a different order, I just never find myself in the same subset that we've already proven goes horribly wrong. Since Kanren doesn't do that kind of reasoning, it brute-forces its way through in places where it doesn't have to, and I'm trying to avoid brute force wherever possible. Even if I'm willing to spin up 10,000 machines, that's only going to give me about five orders of magnitude, and that's not enough as my problem space gets larger, because the blow-up is exponential.

So the glue I'm using to tie all this stuff together is this idea called a propagator. Jerry Sussman, who wrote the SICP book that is the source of the logo sitting here for the conference, has been chewing on this problem for like 30 years, and he finally got a PhD student to work on it with him: a guy named Alexey Radul, the primary author on this little tech report out of MIT, which was also the substance of Alexey's PhD thesis. I'll talk a great deal about propagators tomorrow, so if you want to go deep-dive on the technical portion of this, that's what I'll do then. For someone who's comfortable with the math of it: these are monotone functions between join semilattices.
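To make the clause-learning idea slightly more concrete, here's a toy sketch in Haskell (nothing like a real CDCL solver; all the names are mine): after discovering that the partial assignment x = True, y = False always leads to a conflict, the solver records the learned clause (not x or y), and unit propagation on that clause forces y the moment x becomes true, so the search never re-enters the known-bad region.

```haskell
type Lit    = (Char, Bool)   -- variable name and polarity
type Clause = [Lit]

learned :: Clause
learned = [('x', False), ('y', True)]   -- the learned clause: not x, or y

-- If exactly one literal of the clause is neither falsified nor assigned,
-- that literal is forced by unit propagation.
unitPropagate :: [(Char, Bool)] -> Clause -> Maybe Lit
unitPropagate assignment clause =
  case [ lit | lit@(v, b) <- clause, lookup v assignment /= Just (not b) ] of
    [lit@(v, _)] | lookup v assignment == Nothing -> Just lit
    _                                             -> Nothing

main :: IO ()
main = print (unitPropagate [('x', True)] learned)   -- Just ('y',True)
```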
I can structure problems this way, pushing around information about what I know about variables and values, in such a way that no matter what scheduling algorithm I use to push this information around, I get a deterministic answer. Even though it's concurrent and heavily non-deterministic in the middle, the final answer is deterministic. A lot of what I've done is take Alexey's work on propagators and try to bolt laws onto it. Alexey's stuff is great; it says, hey look, here's this huge space of problems that are all propagator problems. But what it doesn't do is ask the second question: we've also worked on each of those problems for 30 years, so how do I take the things we've learned that make each one of them fast and transfer them to all the others? What do I need to tease out of the mathematical structure that's there in order to make all the other problem domains along the way fast? It's basically throwing all the problems I wasn't smart enough to solve individually into a blender and trying to solve them all at the same time. That's what the propagator glue is about.

So what I use is something like this: if I have an interval domain, where X is between 1 and 5 and Y is between 1 and 5, and I have the constraint that X is less than Y, then I can establish, using propagators, that X is between 1 and 4 and Y is between 2 and 5, just from that constraint. And then if I learn something more about X, I can push that information around to tell me something more about Y. So I use propagators to push extra information around and avoid guessing when I don't have to guess. You can encode the traditional SAT-solving problems as propagator problems, and all sorts of other domains too, so this turns out to subsume a huge cross-section: all kinds of solvers can be encoded in the propagator vocabulary, and having it as a common lingua franca means I can have a solver that may not be the best solver for any individual subdomain but can handle all of them. You can throw Datalog and SAT solving and integer linear programming and constraint logic programming and finite-domain solving at it, and the big composite problem you have is what emerges. That's why propagators act as glue for me.
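Here's that X < Y interval-narrowing step as a minimal sketch in Haskell (my own toy encoding, not guanxi's actual cell or lattice types): intervals are combined by intersection, which is the meet in the lattice the propagators push information through, and the constraint lets each bound sharpen the other.

```haskell
-- A toy interval domain: closed integer intervals combined by intersection.
data Interval = Interval { lo :: Int, hi :: Int } deriving Show

meet :: Interval -> Interval -> Interval
meet (Interval a b) (Interval c d) = Interval (max a c) (min b d)

-- One propagation step for the constraint x < y:
-- x can be at most hi y - 1, and y must be at least lo x + 1.
lessThan :: Interval -> Interval -> (Interval, Interval)
lessThan x y =
  ( x `meet` Interval minBound (hi y - 1)
  , y `meet` Interval (lo x + 1) maxBound )

main :: IO ()
main = print (lessThan (Interval 1 5) (Interval 1 5))
-- (Interval {lo = 1, hi = 4},Interval {lo = 2, hi = 5})
```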
The problem with the propagator network is that I put too much pressure on it, because I'm doing too much work pushing information around. If I say X equals Y plus a constant, then every time I learn exactly what X is, I learn exactly what Y is, but I could do better than that. There's this notion of union-find, or disjoint-set forests, that pops up in most undergraduate computer science curricula at some point; it's the first time you ever see this n·α(n) bound, the inverse-Ackermann complexity bound, which grows super, super slowly, and it's one of the things Robert Tarjan is famous for. It turns out I can modify union-find to allow for a group action. Instead of just saying X equals Y, I want to allow for some group, something that looks like the integers with addition, where we have inverses for addition, we have subtraction, right? Or multiplication with division: we have inverses, we have reciprocals. And I want some kind of group acting on my variables: X equals Y plus a constant, as an example, or X equals not Y, so I have the two-element group with identity and "not": "not" is its own inverse, right? not . not is id, and id is my identity element. The union-find algorithms typically let you create singleton sets, union two sets together, and ask whether two elements are part of the same set; that's what you can do with union-find. If I modify this so that instead of just saying X equals Y, or X is in the same set as Y, I say X equals some group element acting on Y, like X equals Y plus a constant, or X equals not Y, then I can reduce the pressure on the propagator network, because I don't need to say "oh, when you learn what X is, you learn what Y is"; I just make them be the same thing. There is no propagator pushing the value around, it's just different perspectives on the same value. That's the idea of union-find, or unification, modulo a group action. So here, say I've got X equal to some action of A on Y, and Z equal to some action of B on Y; that means Y is the inverse of A acting on X, using the fact that we have inverses in our group, and if I need to look at how Z and X relate, I can go through either one of these paths. All we have to do is modify union-find just a little bit and put one of these group actions in it, and this drastically reduces the amount of pressure on the propagator network, which is the thing that lets me make all of these different domains cooperate. I think I mentioned briefly that integer addition, identity-or-not, and affine transformations with unit scale all turn out to pop up all over the place once I start playing with intervals.

A couple of years ago I gave a talk at Scala World on monoidal parsing, and in that talk I used this notion of group actions in order to talk about how to do reparsing in the presence of small changes to a source file in sublinear time. You make small changes in the source file, and I would like to rebuild the syntax tree, but I don't want to pay linear time in the total amount of source code I have. That means I don't even have enough time to build syntax trees that carry absolute positions, because the syntax tree itself is sized linearly in the amount of code you've given me, so I had to come up with a whole bunch of techniques to work with that. And everything I was talking about earlier with group actions, everything in that talk, turns out to be the glue for how I take a propagator that's listening at X and now needs to start listening at Y, because things have shifted: I have to shift all of those listeners in O(1) time, or I blow up my asymptotic complexity.
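Here's a minimal, purely functional sketch of the "union-find modulo a group action" idea from a moment ago, specialised to the two-element group {id, not}, so composition is just xor; there are no ranks and no path compression, and none of this is guanxi's actual code, it's just the shape of the idea.

```haskell
import           Data.Map (Map)
import qualified Data.Map as Map

-- Each node stores its parent together with the group element relating it to
-- that parent: x = g . parent, where g is True for "not" and False for "id".
type Node = String
type UF   = Map Node (Node, Bool)

-- Find the representative of x and the group element relating x to it.
find :: UF -> Node -> (Node, Bool)
find uf x = case Map.lookup x uf of
  Nothing     -> (x, False)              -- x is its own root, related by id
  Just (p, g) -> let (r, h) = find uf p  -- x = g . p and p = h . r
                 in (r, g /= h)          -- so x = (g . h) . r; composition is xor

-- Assert x = g . y (a real solver would also check consistency on a cycle).
union :: Node -> Bool -> Node -> UF -> UF
union x g y uf
  | rx == ry  = uf
  | otherwise = Map.insert rx (ry, gx /= (g /= gy)) uf
                -- x = gx . rx and x = g . y = (g . gy) . ry,
                -- so rx = (gx . g . gy) . ry (every element is its own inverse)
  where
    (rx, gx) = find uf x
    (ry, gy) = find uf y

main :: IO ()
main = do
  let uf = union "x" True "y" (union "y" False "z" Map.empty)  -- x = not y, y = z
  print (find uf "x")   -- ("z",True):  x = not z
  print (find uf "y")   -- ("z",False): y = z
```

The payoff described in the talk is that once x and y are merged this way, there is no propagator shuttling values between them at all; they really are the same cell viewed through a group element.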
There's a bunch of stuff that's gone into each of these. Basically, at this point, we're just way down in the weeds of the kinds of life lessons I've been able to learn along the way, and I apologize that this is all the rather technical side of things. There's a paper by Atze van der Ploeg, written with Oleg, who came up earlier in the talk; Atze called it Reflection without Remorse, and it has come up in conversation with a couple of folks here. It's about how you make things like sequences of monadic binds efficient, and how you make a number of other structures more efficient than you would expect. And there's a book by Chris Okasaki called Purely Functional Data Structures, if you're interested in learning more about how to make functional code like this efficient. If you took a course on complexity theory, on amortized analysis, on data structures and algorithms, at some point you probably learned about amortized analysis of algorithms; and when you move into the functional programming world, everything you learned about amortized analysis is wrong, because you still have the old object lying around. You can't do a bunch of cheap things to pay for one big expensive step in the future when I can make you do the big expensive step as many times as I want: I can make you rewind and go back to that previous value, because it's still there. Immutability has kind of destroyed our ability to reason about complexity, and Okasaki's book gives us a whole bunch of data structures that restore that by using laziness. This is one of the reasons why I love Haskell: Haskell is really the only language I can point to that has taken laziness sufficiently seriously to be able to do asymptotic analysis of code that happens to involve immutable data structures. And wanting structures that are immutable is not a big ask; it's a really powerful tool when you want to distribute your data and work across multiple cores. So if you needed a reason to consider looking at Haskell, it's really the only language I can think of that takes that idea and runs with it.
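Going back to the Okasaki point for a second, here's a minimal sketch of his banker's queue in Haskell (simplified from the book; the names are mine): the suspended `reverse` of the rear list, together with the length invariant that the rear never exceeds the front, is what lets the amortised O(1) bounds survive even when old versions of the queue get reused.

```haskell
data Queue a = Queue Int [a] Int [a]   -- front length, front, rear length, rear
  deriving Show

emptyQ :: Queue a
emptyQ = Queue 0 [] 0 []

-- Restore the invariant |rear| <= |front| by rotating (lazily) when it breaks.
check :: Queue a -> Queue a
check q@(Queue lf f lr r)
  | lr <= lf  = q
  | otherwise = Queue (lf + lr) (f ++ reverse r) 0 []

snoc :: Queue a -> a -> Queue a
snoc (Queue lf f lr r) x = check (Queue lf f (lr + 1) (x : r))

uncons :: Queue a -> Maybe (a, Queue a)
uncons (Queue _  []      _  _) = Nothing
uncons (Queue lf (x : f) lr r) = Just (x, check (Queue (lf - 1) f lr r))

main :: IO ()
main = print (uncons (snoc (snoc emptyQ (1 :: Int)) 2))
-- Just (1,Queue 1 [2] 0 [])
```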
I only have a couple more of these. The other thing I've been playing with is this whole line of research on something called natural-domain SMT. I said DPLL was sort of a dumb search strategy, in the sense that it's just brute force: if I've got a bunch of things X could be, let's try them all, then let's try all the things Y could be, continuing from whatever X was, and we just keep going down all the branches, and we don't learn any life lessons to keep us out of isomorphic situations. CDCL learns life lessons, but it learns them in a way that's very peculiar to SAT. It turns out there are many domains in which you can learn those life lessons directly in the abstract domain you want to work with. So we can take all the things folks know about abstract interpretation, from a very different area of computer science, and apply them directly to making these SMT solvers more efficient. Well, I'm stealing all the things that make SMT solvers efficient to make logic programming more efficient, and I'm trying to take type theory, through the lens of SMT solving, and make type theory faster. That's the thing that's been driving me in this direction. There's a bunch of other work here where I find ways to exploit the fact that I'm working with abstract interpretation, but I'm not using it the way a computer scientist or a type theorist usually uses it. There are all sorts of abstract interpretation domains. An example of an abstract interpreter would be something like intervals: instead of having a function that returns a number, give me a function that gives you an interval's worth of numbers; I'll work with that as the abstract domain, and only go down to the concrete level when I need to. And I want constraints in this sense too: something like octagons allows me to say "X is less than Y" as a constraint, rather than having constraints on X and Y independently. Now I can have these little 45-degree cuts, so I can have tiling and stuff like that fit into this space. So, I believe Siddharth might be around; I don't know if he's here yet today. He's doing a session, I think, tomorrow afternoon, but he's also got a whole bunch of stuff on using polyhedral loop optimization, which covers the latter couple of those. And I really need to hire that boy. Anyways, there's a whole bunch of other results, and they'll be on the slide; I don't think they're readable at this scale.

But if you want more of my research agenda in this direction: I sit on irc.freenode.net on the ##coda channel. Coda is the language project that this is all nominally contributing towards. The Git repo for Guanxi, the logic programming stuff that I'm exploring right now, is on GitHub under guanxi. I have a Twitch stream; I have been very bad about Twitch streaming lately and I need to get back to it. I just moved to California, and I have set up my camera, but I set it up on the wrong side of my monitor, and it's going to take a day to reconfigure my desktop, and I'm moaning and making excuses at this point; I just need to finish getting set up and I'll be back to Twitch streaming. So if you're interested in maybe a lower-density version of this kind of talk, of just the patter while I'm actively coding things, there's probably 30 or 40 hours' worth of that on there, or way more than that actually; they're eight-hour streams and there are like 20 of them. Anyway, there are many hours of me just sitting there live-coding Haskell. So if you want to get a sense of what it takes to climb inside the head of somebody writing Haskell while they're doing it... it's like looking at a category theory diagram or something like that: the diagram itself is a dead thing, but the act of watching someone draw one is rather interesting, it's insightful into the process. I think the same thing is true of Haskell. If you look at a Haskell library, it's like, oh my God, who would have ever thought of that? And then you watch the flailing that actually goes into implementing it, and maybe it makes you feel a little better about yourself. I do a lot of flailing on Twitch.

So that is the thing I wanted to talk about today: hey, look, logic programming is interesting, we can make it scale, let's do it. I'm trying to figure out all the things I need to steal out of the SMT community to make that go. It's an instrumental goal for me; I want to use logic programming as a constituent part of how to build these more interesting type theories, but I think it's an important instrumental goal. And tomorrow I will dig into propagators in more detail. So that is all I have, thank you very much. I think we may have some time for questions, I don't know. I know, I left myself 15 seconds so I could get a question in.

Can you hear me?

Yep.

Great talk, thank you. So, you talked about program synthesis, so this is just an opinion I wanted from you.

Sure.

One line of thought is synthesizing programs from these complex logics, like separation logic; even Nadia Polikarpova has worked on that, where they synthesize programs from separation logic and so on and so forth, and you actually do it in a language-independent way, since you write your specifications in a higher-order language, a higher-order logic, and then you can synthesize programs from that.
So have you come across this, and what's your opinion, and how does it stand in relation to what you're looking at with type-driven synthesis and things like that?

So, I don't know that this is necessary; I don't know that these are mutually exclusive. I think Nadia's work, her focus on Synquid... I think that, comparatively, Kanren spits out terms really, really fast, right? But it has almost no constraints on it, and adding more and more of these constraints gets it down to a reasonable scale. Any kind of scaffolding I can put around this helps; each one of these things is kind of holding down a different part of the elephant, or whatever, I'm making a very bad mixed metaphor. But I need to latch things down: OK, there are no good programs over here, stop looking; I don't expect one over there, stop looking in that direction. So each of these ways to constrain things, whether you're directly specifying it in a higher-order logic, or using an SMT solver and e-matching to do the synthesis, all of these are different ways to view the same trick, and I'm just trying to figure out which ones are fast.

Again, great talk, thank you. I would like to ask a question about Kanren. How is it different from Agda or Coq? How do you compare? How is Coda different?

So, Guanxi is a logic programming framework, so it's more like Prolog than Agda or Coq. Coda itself, the language project that I'm heading towards, is probably a talk in its own right, but it's more about how I build an extensional type theory, which is more like NuPRL than Coq or Agda. I'm not going to have enough time to answer this question in band in the session, but for the most technical folks in the audience: it's an extensional type theory designed so that I can sort of gradually type between something like Haskell and something at that higher level of confidence, but in such a way that all of the side goals are emitted in a form where they can be discharged gradually: hey, look, here's a million sub-goals; now let's take those million sub-goals and farm them out across all these machines, or take a bunch of grad students and throw them at my problems. I don't care if my tactic is "Mechanical Turk, $20"; I just want answers to my problems, to my sub-goals, right? So having the ability to improve the fungibility here, the ability to convert money directly into proof, matters: I don't care whether I do that by throwing a bunch of programs at the problem or by throwing a bunch of grad students at it. Being able to throw both of them at the same set of sub-problems matters to me, because then I can create a market, and, I guess, thereby judge the value of a proof in dollars rather than compute dollars. OK, I think, yes: I will be around all day and for the next couple of days.