Okay, so first off, I swear I did not look at any slides in advance. You're not going to believe me, because you're basically going to get a repeat of her talk. Or I'm going to argue we should give her everything except for functions, basically. What qualifies me to talk about the next great programming language? Absolutely nothing. Quite seriously, I'm not even a programming language designer like Andy. So, absolutely no qualifications here, except that I have been writing code for a very long time. And from the perspective of someone who has written 3 million lines of really bad code and maybe a few hundred lines of good code, I do have some opinions about what I want from a programming language. So I'm going to take it as a given that we want static typing and purity, which is I guess the one difference from Andy's talk. And I'm going to look at the leading statically typed programming languages today. We have ML and its variants, SML and so forth. We have Haskell and we have OCaml. And one of the really interesting things about all these languages is that they are decades old. These are really old languages. Technically, OCaml was invented in 1996, but if you consider that it's basically just an extension of Caml, which is itself descended from ML, the roots of these programming languages really go back decades. And Haskell is, in my opinion, probably the best statically typed programming language for getting stuff done. It also happens to be my personal favorite programming language. So I'm going to take the opportunity to pick on it a bit. It turns out that, despite the fact that I love Haskell, there is no such thing as Haskell. And that's because the last official definition of the language is too limited for people to write real programs in. So we have to use extensions, and GHC supplies us with, I think, around 70 different extensions. And these can be used in almost any possible combination.
So there's no one definition of the language Haskell. In fact, I actually had to look up how to pronounce this number, because I wasn't sure: two to the seventieth is about 1.18 sextillion possible flavors of Haskell. I know, some extensions imply others. Oh, okay, so it's not quite that bad. But fundamentally, this isn't a language. This is a very large family of languages. And some of these extensions are not so great. Let's choose IncoherentInstances and combine that with UndecidableInstances. Maybe we'll add in OverlappingInstances just for the heck of it. Some of these are great. Some of these are not so great. And in there, there's a bunch of different languages. One of my contentions is that even Haskell, our very best functional programming language, suffers from basically decades' worth of accretion. It had a core that was designed a really long time ago. And individual features were well designed, but they were piled on one after another after another after another until we end up with the GHC situation, which is 70 different extensions that all pull the language in these random different directions. It wasn't designed all together as a coherent whole. And that's something I'm very interested in, because there's been a bunch of development since then. We've actually made progress in understanding type theory and learning about functional programming. We've written programs, in some cases hundreds of thousands of lines of Haskell code, so we have a lot of experience. Academic research has pushed the state of the art. So I'm very much interested in what it would look like if we set out to design a functional programming language from the ground up and just threw out all this accretion. I think some of the features in languages like Haskell contaminate our minds. And we can't even see them anymore, because we just take them for granted. You get so used to them that you get indoctrinated into FP.
And then suddenly it's all you can see; you view the world through the lens that is ML and Haskell, or whatever your functional programming language of choice is. So what about Idris? Well, Idris is a modern functional programming language, and it was designed with the benefit of a lot of the research that's been done in type theory. Unfortunately, Idris is a research language, and it's more a test bed for experimenting with new ideas. It's not really designed, at this point anyway, to be a language I can take into my day job and write production systems with. Nor am I convinced that the Idris approach, as much as I like it, is going to be the next big thing. Actually, I rather think that it won't be, but I could be wrong, and I would love to be proven wrong. So what is there? I honestly don't know. There's nothing out there that strikes me as capable of being the next great functional programming language, which is perhaps why we're all using Haskell or OCaml or Scala or Lisp or Scheme or Racket. We're all using languages that go back decades and decades. So we're going to be talking about my ideal functional programming language. Let me talk about the stuff I'd throw out. I'm going to do this one at a time, and I'm going to do it in such a way that hopefully most of you will agree with me in the beginning and not heckle me, and towards the end all of you will be throwing tomatoes at me. Pattern matching: I really don't want pattern matching. And the reason is that it's just another ad hoc thing thrown into a language that doesn't need it. In fact, we can do "pattern matching", in quotes, entirely with other mechanisms in a language like Haskell. An example is lenses and prisms and type classes. And those features allow us to implement something that is as powerful, actually more powerful, than pattern matching, because it's extensible, it's general, and it's implemented in terms of other features of the language.
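To make the prisms point concrete, here is a minimal sketch in plain Haskell of pattern matching built from ordinary functions. The `Prism`, `caseOf`, and `orElse` names are my own simplified stand-ins, not the real lens or total package API:

```haskell
-- A hand-rolled "prism": a way to try to deconstruct a value,
-- plus a way to construct one. (The lens/total packages are far richer.)
data Prism s a = Prism { preview :: s -> Maybe a, review :: a -> s }

_Left :: Prism (Either a b) a
_Left = Prism (either Just (const Nothing)) Left

_Right :: Prism (Either a b) b
_Right = Prism (either (const Nothing) Just) Right

-- A first-class "case clause": if the prism matches, run the handler.
caseOf :: Prism s a -> (a -> r) -> s -> Maybe r
caseOf p f s = f <$> preview p s

-- Chain clauses like fall-through alternatives in a case expression.
orElse :: (s -> Maybe r) -> (s -> Maybe r) -> s -> Maybe r
orElse f g s = maybe (g s) Just (f s)

-- "Pattern matching" with no pattern-matching language feature in sight.
describe :: Either Int String -> Maybe String
describe = caseOf _Left  (\n -> "number: " ++ show n)
  `orElse` caseOf _Right (\t -> "text: " ++ t)
```

Because the clauses are ordinary values, they can be abstracted, passed around, and composed, which is exactly the "first-class pattern matching" being argued for.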
And that's an example of using the Haskell total package, as well as some prisms; you should look it up. Amazing stuff, and it proves we don't really need pattern matching to get even pattern-matching-like syntax. So let's chuck that one out the window and instead adopt first-class pattern matching, which would be pattern matching implemented with other constructs of the language. Come in. Yes. I don't know if you wanted to leave it for later, but how could the compiler check coverage of cases? It's possible. We'll talk after. So, records. I'm going to say let's throw out records, because, like Andy pointed out, a record is basically a function from an identifier to some other value. In fact, we can implement records in any dependently typed programming language, and even in a pseudo-dependently-typed programming language like Scala, as first-class citizens: basically as partial functions from strings to values. And we can do that in a totally type-safe way that allows us to extend records, delete fields, manipulate them, smash them together, do stuff that is totally impossible with the records in Haskell and PureScript and languages like that. And we can do it because these are first-class constructs that only require the use of a dependently typed programming language. And you're going to complain about the syntax; well, I have more to say on that. But fundamentally, you could make this as pretty as you wanted. You could give it any syntax you wanted; it doesn't need to be this hideous, bulky thing. What I want to argue is that there should be no such thing as a record. It's just a partial function from an identifier, or a string, or whatever you want to call it, to a value. Now, about modules. I am actually not a big fan of modules, and so I'm going to toss them out of my ideal functional programming language. This is a module in SML.
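Here is a rough Haskell approximation of the records-as-functions idea, using a GADT of field names in place of real dependent types. The `Field` and `Person` names are hypothetical illustrations, not anything from the slides:

```haskell
{-# LANGUAGE GADTs, RankNTypes #-}

-- Typed field names: each key knows the type of its value.
data Field a where
  Name :: Field String
  Age  :: Field Int

-- A "record" is literally a function from field names to values.
type Person = forall a. Field a -> a

alice :: Person
alice Name = "Alice"
alice Age  = 30

-- Record update is just function wrapping: override one key,
-- delegate the rest. No built-in record machinery required.
withAge :: Int -> Person -> Person
withAge n r = \f -> case f of
  Age  -> n
  Name -> r Name
```

In a truly dependently typed language the keys could be arbitrary strings checked at the type level; the GADT here is the closest cheap stand-in today's Haskell offers.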
And I've actually yet to find a case where I needed or could benefit from a module. In this particular example, I take a stack and I make a generic definition of a stack. This is using PureScript. I make a generic definition of a stack that can create new stacks, push things onto a stack, and take the top of a stack, and do that completely independently of the representation of the stack. And I can pass this around. I'm using PureScript in this example because PureScript has first-class records and Haskell doesn't. But again, if you're working with a dependently typed language, then these records would themselves be implemented in terms of other features of the language. So here's my generic function, doSomeStackStuff, which requires only a stack module. It can do a whole bunch of stuff and end up producing some result using that generic stack. And you say, well, that's not quite fair, because you can do other things with modules that you can't do with this sort of polymorphic encoding. Maybe that's true, but I've yet to find a compelling example of something that I couldn't emulate using other features of the language. In this example, I have a reservation system which requires a stack module. It doesn't care what sort of stack module. And it ends up returning some sort of generic module for a reservation system. So I've expressed a dependency from one module to another, and I've done it using other mechanisms of the language. In fact, I've never seen a compelling example of a single case in which real first-class modules were the superior choice for implementing some functionality. In order to get that, you need higher-rank types, though. That's correct. And the reason you need that is because you're Skolemizing an existential. Whereas a module would have made it first-class. Yes. So, again, I don't see the need for something like a module when I can implement it in terms of rank-N types. Syntax.
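The stack example can be sketched in Haskell as a record of operations plus a rank-2 consumer. `StackOps`, `listStack`, and `doSomeStackStuff` are my reconstruction of the idea, not the actual PureScript from the slide:

```haskell
{-# LANGUAGE RankNTypes #-}

-- A "module" is just a record of operations, abstract in the
-- representation type s.
data StackOps s a = StackOps
  { empty :: s
  , push  :: a -> s -> s
  , top   :: s -> Maybe a
  }

-- One possible implementation, backed by a plain list.
listStack :: StackOps [a] a
listStack = StackOps
  { empty = []
  , push  = (:)
  , top   = \s -> case s of { [] -> Nothing; (x:_) -> Just x }
  }

-- Generic code that depends only on the "module", never on the
-- representation: this is the functor-like use of modules.
doSomeStackStuff :: StackOps s Int -> Maybe Int
doSomeStackStuff (StackOps e p t) = t (p 3 (p 2 (p 1 e)))

-- Hiding the representation entirely is a rank-2 type: the consumer
-- must work for every s, which is the Skolemized existential.
withSomeStack :: (forall s. StackOps s Int -> r) -> r
withSomeStack k = k listStack
```

The rank-2 `withSomeStack` is exactly why the heckler's point about higher-rank types is right: that is the price of encoding the existential without a module system.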
So this is where hopefully I'll get more than Daniel heckling me. I'm going to argue that we throw away syntax entirely. I'm going to stop pretending that programs are strings of ASCII characters, because they aren't. And almost every single feature of today's languages is a workaround for the fact that our programs are strings of ASCII characters. That includes implicits, and the order of function parameters, which is crazy, and compiler errors, and funny-looking symbols, and scopes, and modules (that is, Haskell-style non-modules), and imports, and name clashes, and holes, and a whole host of other language features that make up so much of what it means for a language to be familiar to us. They're all based on the assumption that programs are strings of ASCII characters. Wait, wait a minute, heckler, what are you talking about? We're going to talk about some of the things I do want at the end, so we'll get to that. OK. I actually don't want to ask this question; I want somebody that knows SML better than me. Does anybody want to make the point about generativity for the modules part? Nobody? I will. How do you enforce the distinction between differently instantiated versions of a module? You're going to get that with existentials. Are you? OK. What you won't get is the non-generative ones. That's what sometimes people want. Oh, I was going to point that out: existentials give you generativity. OK. And you can get existentials from other encodings. What this won't get you back is the following. In OCaml, for example, the compiler uses modules to determine the files to load. You don't tell the compiler what files are in the program; it uses module identifiers to figure out what to load. So if you take modules out of what the type system knows about, then you're going to have to supply the capability to the compiler to load a file some other way. But you don't have files, because you don't have text.
So it's great. So good. That was kind of what I was going to say. So with respect to the ASCII point, how do you answer the objection that human language for millennia has trended from pictographs to ideograms to alphabets? I mean, ASCII is literally the foundation of thousands of years of linguistic evolution. Congratulations. I'm assuming that programs are ideas that are communicated. Yes, but ideas don't have to be tied to a linear string of characters. So fundamentally, this throwing away of ASCII, or separating programs from ASCII: you need a compelling replacement, because regardless of whether we agree that what we write is a string of ASCII characters, whatever we write in is a language. Like, imagine a language in a visual environment in which you construct things by editing trees, a method very similar to what you see in demos of, like, Unison, for example; the Unison folks are working on something of that order, where you construct programs at a high level and you fill in holes with possibilities. You have holes on that list. No, a different type of holes. So that's partially my argument for this. Now, on to some types. So hopefully I'm going to get a number of people objecting to this one. You don't want names for things? Well, we can't name things if we don't have characters, right? Here we have a pile of newtypes: types called Positive and Negative and AddressSize. And they differ in one respect: the names. And if I take away the names... Well... If I take away the names, what does this stuff even mean? Less. How do you do units of measure? How do you have any sort of type safety whatsoever in that? How do you not cry every night? Units of measure have laws that affect their composition. They compose into other units. And that's the means by which I want to capture them. Yes. Yes.
So what I want to do is stop pretending that differences in names actually matter. When you see code like this, you should see String and you should see Float. And the fact that I call one an Email and one a Domain doesn't really make a big difference. Now, I'm not arguing not to use newtypes and data and new names in Haskell today. I'm arguing that fundamentally, in some future language, we're not going to have to do that. Yes. We have a half page, like a hundred and some odd lines, of newtypes, so you're banned from using them. I hope you don't need to use Elasticsearch, because that's your only way in. I love newtypes. I love newtypes. I use them all the time. So you don't like newtypes? No, I'm arguing that this is the compromise we have to deal with today; I'm not arguing in favor of it. Imagine that a type declaration for emails came with a regex which recognizes a valid email, and that was checked automatically as part of the type. And now I've restricted even the creation of that data, and so we have smart constructors built into the language. Smart constructors: that's how we work around this problem now. But if they were built into the language, it would be at the type level. But then how would you refer to one of those... You could still call it an email, but it wouldn't be a newtype restriction anymore. It would be something that had guarantees in the language. You'd just have to write the regex every time you want to use the regex. So let me actually play a guessing game. I've taken two of those types from the preceding slide and I've done some Liquid-Haskell-style refinement type stuff off to the side. What do you think that first one is? Positive. What do you think that second one is? Negative. Damn right. And you did that. Yes, strictly negative. Without having to look at a name. Yes. I was going to say, Float and String are just names for different representations of the bytes. So at some point you have to have a name.
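For reference, here is the smart-constructor workaround as it looks in today's Haskell. The validity check is a deliberately naive stand-in for a real email regex, and the names are hypothetical:

```haskell
-- Today's compromise: a newtype whose constructor is kept private,
-- plus a smart constructor that enforces the invariant at the only
-- place values can be created.
newtype Email = Email String deriving (Eq, Show)

-- Very naive "validity" check: something before an '@', and a '.'
-- somewhere after it. A real implementation would use a proper regex.
mkEmail :: String -> Maybe Email
mkEmail s
  | '@' `elem` s && '.' `elem` dropWhile (/= '@') s = Just (Email s)
  | otherwise                                       = Nothing
```

The talk's point is that in a future language the check itself would live at the type level, so the guarantee would travel with the type rather than with a conventionally hidden constructor.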
Hey, we're back to Gödel numbers. Names just help you remember what the properties are. That's their only purpose. But fundamentally, Float shouldn't have any special status in anybody's mind. But I would be really annoyed if, every time I was reading and referring to a positive number, I had to visually parse "greater than zero". So that's the point: in my visual programming environment, I can look at something and it's called an email, and if I don't know what that is, then I can go to that alias, whatever it is, and I can find out what it means to be an email. And fundamentally, the difference is about laws and properties, not about the name. But that's just jump-to-definition, and I'm truly heckling right now. So if I jump to the definition of this email, what am I going to find? I'm going to find a String. But if I jump to the definition of, let's say, something called Positive over here, what am I going to find? I'm going to find that property. In whatever language you were just talking about, though: in certain languages, the definition of email could capture that pretty succinctly. If it does that, then that's awesome. Yes, that's right. So I think another place this goes is: as isomorphic as possible, right? Because I may define a type that is email that has some sort of set of laws, and, you know, Greg may define a type that has some sort of set of laws, and if the laws are identical, then those types should be considered isomorphic, right? It doesn't matter what it's called. That's exactly the kind of thing I'm talking about. Right, but we have many number types that are all isomorphic to each other, set-theoretically.
They have the same, you know, level of infinity of inhabitants, but I can take a pair of numbers and put one over the other and that's a rational, and I can put them next to each other and that's a complex number. They're the same representation, they have the same number of inhabitants, but they have different meanings through the operations assigned to them. So if you talk only about the properties of the inhabitants, that's not sufficient. You have to talk about things and their operations together. Yes? So let's say that I have a program with a last-name type and also a city type. These are both strings; you can't distinguish between them by patterns. "Houston" is a valid value for both. How do I keep myself, as a programmer, from accidentally mixing them up? Provenance. What? Provenance. My program tracks, all the way from the beginning, where these things came from, whether a database or something else, while remaining agnostic about whether it's a city or a first name or a last name. Like, what matters isn't names for these things. What we need is a programming language that allows us to capture the fact that these values came from different places, to capture the lineage, where did this come from, and then propagate that in a type-safe fashion throughout my entire program. So in my program, I look at this stuff and it prevents me from mixing an email and a city, but not because one is called an email and one a city; it's because those things are different, because they have different provenance that goes all the way back to where they came from. Are you saying "provenance" or "providence"? Provenance. I'm sorry.
What if you have some common ontology? So you could actually, like, process a scientific paper and produce a program. Some people do that for their job: you read a paper and you try to implement it. Like Bloodhound: they built a DSL so they could map that domain more easily. But there are still reasons to ask: what if you had some kind of shared ontology to map things to, instead of that? I think any ontology you pick is fundamentally going to run into these same sorts of problems, and I would prefer the distinctions between types to be captured by formal properties rather than names. That's my only point. It looks like this could fit in there. It could be; I'd have to dig more into it. Yes. Are you okay with names being attached as metadata? In a form of provenance, yes. So provenance makes sense if your data is coming from some external source, but, like, in the case of Bloodhound, it's just a DSL. You construct values, but it's Elasticsearch, so everything's like string, number, string, string, string. So how do I prevent transposition errors? Do you want me to, like, make a graph with the types of Elasticsearch? First, we kill Elasticsearch. Yeah, you know, that's good. So first we kill Elasticsearch? What's that? You're generating data to send to it, so you use named fields in your records to get the... Oh my gosh. So, for example, in a dependently typed programming language, one way you can annotate your data with metadata is just using a string, or however you represent strings, namely what the strings are named. And that becomes part of its type. Yes, yes. So I don't think you need anything special to model that, but I think the vast majority of cases can be captured using either a notion of provenance, where did this come from, or a notion of properties, laws that this thing satisfies.
Can I convince you to make a Church encoding of the library? I'd like to see that. In Haskell, for reasons, we have newtypes, and you should use newtypes; if you're using Haskell or PureScript, you should use data. So all the stuff that I'm talking about doesn't apply to current languages. Data. All right. So basically you just hate humanity. Everything, yeah. So what is a data declaration, fundamentally? It's a tagged union. You just have this little tag thing that tells you which branch of your union contains valid data. And of course, we've improved many, many, many times upon that. But fundamentally, that's what it is: in C you'd actually only have one void pointer, because the tag would tell you whether it was the left or the right. Fair enough. Fair enough. Yes, so, in any case, what is Either except a description of how I'm going to lay stuff out in memory? But I'm not ready to... I'll take your heckling in a second. Two slides and you can heckle a little bit. So I created one of these data types. And what's the first thing that we do when we learn functional programming? We start pattern matching. Yes. Yes, boo. Because pattern matching is super fun when you start functional programming. It's like, wow, I didn't know this was possible. So you pattern match all the fricking time. And you pattern match so often that you eventually run into problems with it: I didn't cover all my cases, or I'm duplicating code in this case, and this case, and this case.
And then eventually, you get the bright idea to go pro and start describing your code in terms of catamorphisms, or some other kind of -morphism, some fold over the data structure. And so you create this handy function called fold, and you discover how powerful it is, and you start using it in lieu of pattern matching. You still have to use pattern matching to implement your definition of fold. But after you've got this definition of fold, now it's just mega powerful: it composes with other folds and has all these other cool properties. Well, what if we just skipped that intermediate step and went directly to defining our data structure in terms of a fold? Well, that's called Church encoding, as most of you probably know. And this is a version of a Church encoding. There, I've constructed a list that is nothing more than a fold. And once I have this definition of a list, I can construct nil and cons. And I can do all the same sorts of things that I would be able to do if I were pattern matching against bits laid out in memory. One second. And you have to ask yourself, at the foreign boundary, if I'm passing one of these lists into a function, or I'm reading it, or I'm getting it from some external source: how is it represented? Could be an array. Could be a linked list. It could be a vector. It could be a skip list. It could be any one of a number of different things. What I've done here is I've moved from bits-in-memory layout to a capability-based description of what a fold is. That description of a list tells me what it looks like when it's laid out in memory, and this description of a list tells me what I can do with it, what capabilities it gives me, and in fact, I can fold it. So, yes? Towards the end, when you showed the functions with nil and cons, that sort of approaches a visual representation of the concept of, like, I'm going to say, Either or Maybe or something like that.
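The slide's Church-encoded list can be reconstructed like this. The names are mine, but the shape, a list that is nothing more than its fold, is the one described above:

```haskell
{-# LANGUAGE RankNTypes #-}

-- A list IS its fold: given a "nil" value and a "cons" step,
-- it produces a result. No bits, no memory layout.
newtype List a = List (forall r. r -> (a -> r -> r) -> r)

nil :: List a
nil = List (\n _ -> n)

cons :: a -> List a -> List a
cons x (List xs) = List (\n c -> c x (xs n c))

foldList :: r -> (a -> r -> r) -> List a -> r
foldList n c (List xs) = xs n c

-- The usual list operations fall out, with no commitment to any
-- particular representation at the foreign boundary.
toList :: List a -> [a]
toList = foldList [] (:)

sumList :: List Int -> Int
sumList = foldList 0 (+)
```

Note that `foldList` needs no pattern matching at all: the capability to fold was baked in when the list was constructed.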
But I would argue that the thing that you said describes the bits also describes the decision more visually than this. I see two functions; I see something on the right-hand side that doesn't help me conceptualize the different cases. That's just familiarity, right? Familiarity, I mean, and tooling. So we're talking about syntax here. This is how I represented it in this ASCII string of characters, which, I've argued, yes. That's familiarity, yeah, but I'm just saying that the data declaration pins the concept down more than the fold does. So it is sort of, I don't know, kind of more physical. And by the way, I know this is strictly more powerful than the list I defined down there at the bottom. Yes. So basically what you're saying is you want a dependently typed lambda calculus with de Bruijn notation? I don't know, maybe. Yeah, I mean, you're talking about Church encoding everything. That already exists. Deleting everything in that language: nobody's stopping you. What's wrong with knowing how your bits are laid out in memory? That's just a representation detail, if you want a different representation. Yeah, but how your program performs might depend on it. So: I am unconcerned with syntax. If you wanted syntax, which I don't, you could have some syntax for describing data structures as this or this. But fundamentally, what I want that to be is nothing more than a function. Yes. So what do you do with stuff that's only meant to mean something to a human? Images, audio, that kind of stuff. What do you mean, exactly? Well, if we're going to say our data is simply a function, there are things that we would normally lay out as bits in memory that are only meaningful to a human. We can handle that by defining a byte over there, and then defining a list of bytes. OK, so you define something in terms of this. It's a problem. Yes.
I think that this could make an incredible core language for your thing. So if you look at Idris, you can look at TT, which is its core language. And it's basically what you're saying: take TT, with patterns replaced by Church encodings, and you have your language. But I would argue that that language is completely difficult to learn and to understand. Even if you have visual things, this is not a list, conceptually. So, this is strictly more powerful, and you can generate it from the description of the list. Why would you expose this? So I don't get why you want to go there. For me, this looks like going to low-level functional programming. So this is like an assembly of functional programming instead of something high level. So, I fundamentally don't care if there's some higher-level interface, whether it's a data declaration or whatever it is. Fundamentally, I'm concerned that that thing, whatever it is: if you want syntax, and you're arguing basically for syntax, then it should merely be syntax for a function. Yes, but what I'm arguing is that exactly the language is the syntax that you give, not the core implementation. So what I'm saying is, for me, this is very nice as a way to implement your language. And I'm sure this gives you a lot of interoperability niceness. But this doesn't solve the problem of what the next functional programming language is going to look like. Because it's not going to look like that, right? It's not going to look like this. I don't know exactly what it's going to look like. So if the surface looks different anyway, then the core is, I'll say, outsourced; I don't know what it is. You want to save syntax? I understand. I understand. But I tried a couple of these systems; I'd drop into the core mode and look at the AST, and I found it so complicated to use.
So composability is basically related to the regularity of shapes. And if your things are very regular, if you only have one thing, then that maximizes your composability. Even though, like TT, as you point out: actually, TT doesn't have this representation. But theoretically, some programming language, like Haskell, could have this core representation. But fundamentally, if it exposes that in a different way, as a different kind of thing in the host language, then it will stop composing with all the other things. Because you have this kind of stuff that composes over here, and that kind of stuff that composes over there. I'm very, very interested in the property of universal composability. Yeah. So I wonder, is there a little bit of semantic trickery going on here? Maybe. Maybe. So right now you're saying you've got... but I do have more slides to get to. Oh, you're just trying to stall on this one slide. What's that? We're already over time. No one here is going to stop you from going over. So if you've got something that has names, and those names have to be representative of how things are laid out in memory... they don't have to be. And then you assign those names capabilities. But what you're saying is, you get a list of capabilities, and at some point you have to reference it by something: you can give it a name that's a list of ASCII characters, or you can give it a picture of a list. But something has to become the language that references this thing. It just seems like a reversal of how we define things, but that could be done with the implementation. It's so weird, the way we do this. Because we do define this data structure, and if you've been functional programming for a while, almost the first thing you do is create a fold over it, or use some generic mechanism that does that for you. And then you oftentimes interact with the fold more than with the data type itself. I look at this type definition and it tells me everything I need to know to use this thing.
It tells me exactly what I can do with it. And I don't need to know anything about how those bits are laid out in memory, whether it's an array or a linked list. I want to respond to the thing he said, actually. This might not be the thing John's thinking, but you mentioned this idea that what John has identified is almost more like a functional assembly than something you'd actually use directly. You'd have to build something on top of it, and you'd only be interacting with that thing you built on top, because John's thing is too basic. And at that point, it's still useful to have built it on top of this expressive substrate. That's actually something I was trying to do with Emily. I was emphasizing the thesis that you can do everything with application. But the thing that's interesting to me is: I had to build this entire language on top of application, but I could imagine building a separate, different language on top of it that could still potentially talk to the first one, because they're both basically based on application, and they both have enough type knowledge to maybe convert between the two representations. I don't think you want the same thing all the time. I don't think two programmers want the same thing. And if you argue with that, I'm going to ask you, as a Haskell programmer, how many DSLs you've written. But there is a thing: in Haskell, you're building the DSLs on top of something pretty complicated. DSLs sit on top of Haskell, and when I see people design Haskell DSLs, they're deformed by the fact that they have to fit inside Haskell syntax. But if instead Haskell were built on this one simple functional ASM, and your DSL were built on the same functional ASM, then the two together, because they share a common substrate, would be something very interesting. As it transpires, Haskell is built on such a functional core.
It's called System FC, or F-omega-C, and it's pretty well studied; there are a number of papers about it. So all those extensions desugar to a relatively small core, and SPJ has said the only reason he believes it's okay to have this many extensions is that that small functional core acts as a sort of consistency check on all the insane, incoherent things you might want to do. I will point out that of all the functional languages represented at this conference, I think there's only one that doesn't have anything resembling a core that it desugars to. That language starts with an S and ends with an A. They're working on it. They're working on that. And then you go and look at how theirs works, and it's all the same! It's not just System FC; there are bigger cores too. Oh my God, you want to know whether a given program terminates? In general, you can't tell. Now: recursion. Recursion is like the goto of functional programming. It can lead you to construct monstrously complicated things that are really hard to understand. Selling pitchforks over here. It's okay, there's always money in the banana stand. So yeah, recursion is so powerful that when you add it to a language, our compilers can't understand the language anymore, you can't prove whether or not a program will terminate, and even people can't understand it anymore. Full-on general recursion is such a powerful construct that I think it's too powerful for a functional programming language. In the sense that in my ideal functional programming language, I wouldn't have to try to understand anything like that, because I don't want to have to trace that amount of state in my brain; I'm probably going to get it wrong. And I did write this thing, and honestly, you have no idea whether it terminates.
And it's only three lines. It's not that many expressions. Do you know whether it terminates? You never invoke anything. So I have a question. Say there's some fundamental change you need to make to get to the next level. For something like this, the language itself could support general recursion, and you could have some construct, a macro, that restricts you to a subset of the language. I don't see why you'd want to bake into the language that you can't do general recursion, because in some cases you might want to do it. Why not have it as the normal constraint of the language, so that by default you can't do it, but you can always opt out? So the idea of language levels is one alternative. But one of the things to think about is this: you're talking about baking into the language a constraint that you can't do general recursion. All of our languages bake in the constraint that you can do general recursion. And that is actually a profoundly weird thing. Really profoundly weird. That's also what I'm saying: I just want to be able to do both. So: recursion is used to do things with induction. You have this big old thing, or a number, or whatever it is; you break it down into smaller and smaller bits and express your algorithm that way. That's pretty easy, right? Because what you can do is, if you define your data structures using church encoding, then the act of creating that data, whatever it is, stores up the energy, like potential energy in physics, necessary to deconstruct it. So for example, using that definition of fold for a list, and the definition of cons I gave you: if you do a cons b cons c cons d cons nil, you end up building into that data structure the capacity to repeat something four times, or however many it is. So induction on data structures, expressing algorithms on data structures using induction, is easy.
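That potential-energy picture can be made concrete with a church-encoded list, where the data literally *is* its own fold. This is my sketch, assuming the usual encoding; the names `CList`, `nil`, `cons`, and `toList` are illustrative, not from the talk.

```haskell
{-# LANGUAGE RankNTypes #-}

-- A church-encoded list: a value of CList a is nothing but
-- its own fold, waiting to be handed a "cons" and a "nil".
newtype CList a = CList { foldC :: forall b. (a -> b -> b) -> b -> b }

nil :: CList a
nil = CList (\_ z -> z)

-- Each cons bakes one more "repetition" into the structure:
-- building a cons (b cons (c cons (d cons nil))) stores the
-- capacity to apply a step function four times.
cons :: a -> CList a -> CList a
cons x xs = CList (\f z -> f x (foldC xs f z))

toList :: CList a -> [a]
toList xs = foldC xs (:) []

main :: IO ()
main = do
  print (toList (cons 'a' (cons 'b' (cons 'c' (cons 'd' nil)))))
  print (foldC (cons 1 (cons 2 (cons 3 nil))) (+) (0 :: Int))
```

Note there is no recursion anywhere in the consumers: deconstruction is just releasing the energy that construction stored.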
You don't need general recursion to do it with your church-encoded data structures. It's just fold, fold, fold, fold. I'm thinking about a question, but I can't answer it, so I don't know. Okay. So what about co-induction? This is the really tricky part, because with co-induction you're describing something that could potentially be infinite. And of course Haskell makes it super easy to do that anywhere, because everything is lazy. But if you don't have recursion, then you end up defining a state machine instead, and it's basically an unfold. And this is your definition. Things like this are not powerful enough to represent arbitrary co-induction. But if you increase the power of your type system, and it would have to be dependently typed, such that you can do things like session types, where the type that you get out can be different from the type that you put in, and you can provably feed that type back through on the next pass, then you can end up expressing what this amounts to: a description of a program that can be run externally and potentially continue on forever. And in dependently typed, total functional programming languages, that's sort of how you have to model infinite processes anyway. So adding the further restriction that you can't do recursion seems relatively minor on top of that. And the advantage of something like this is that I can look at it and understand it. Even if it creates this infinite thing, I can look at any definition of this function and pretty simply understand it. And yes, it's going to make some things harder. It's going to make certain compositions, certain algorithms, harder to describe. But I can describe them like this, and once I've done that, the complexity for me, the mental complexity, is going to be bounded.
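Here is one hedged reading of the "state machine instead of recursion" idea: an infinite process as a seed plus a step function, observed only through a fuel-bounded driver. All names (`Step`, `runFor`, `naturals`) are mine; the only recursion left is structural on the fuel counter, so termination of any observation is obvious.

```haskell
-- A co-inductive process as a state machine: a seed and a step
-- function that, given a state, produces one output and a new state.
type Step s a = s -> (a, s)

-- Observe finitely many steps. The recursion here is structural on
-- the Int fuel, so every observation visibly terminates, even though
-- the process itself is described as running forever.
runFor :: Int -> Step s a -> s -> [a]
runFor 0 _    _ = []
runFor n step s = let (a, s') = step s
                  in a : runFor (n - 1) step s'

-- The infinite stream of naturals, described without writing an
-- infinite structure: just a seed (0) and a step.
naturals :: Step Int Int
naturals n = (n, n + 1)

main :: IO ()
main = print (runFor 5 naturals 0)
```

Productivity is manifest in the type: every call to `step` yields exactly one output before the next state.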
Whereas going back to this example here, which might be co-inductive, or, I don't know, I'm just going to tell you it doesn't terminate. So, all right, I'm going to ask it directly: do you want this language restricted to only programs that can be proven to terminate, or not? Well, co-induction means that you can write a program that is co-inductive, so it can go on forever, but to be productive, it always has to be doing something. But then, without general recursion, it would be provable for any program in the language whether it terminates or not. So using induction, you know that it will terminate. Using co-induction, you know that you're always making progress, that the function you're running over the potentially infinite stream is doing something. You're total on a per-piece-of-work basis. So what does my ideal functional programming language look like? It'll only take five minutes. Does your ideal functional programming language look like another programmer that you can tell what to program? Ask me again in a year. Okay. And I honestly think that we don't need Turing completeness to do 95, maybe even 99% of programming. Like all the business apps I write, anything written using reactive frameworks, all sorts of programs. You can't prove that, Brian. Give me an example of a program I cannot write without Turing completeness, and I'll write it. A compiler for your language. A JavaScript interpreter. I have an idea for the onion: we can declare which layer of the onion we're using. We'll call them pragmas. We'll put them at the top of the file. So: the sub-Turing-complete part. There you can prove and optimize a whole lot more. It's super rich, and with this sub-Turing-complete part you can write the vast majority of your program.
And then there's sort of this other part where you lose the ability to prove things; your optimizations aren't as powerful, and abstraction has a cost. But you can drive that pure sub-Turing part from there, feeding it inputs and doing all the other sorts of messy things. It's a bit like the IO monad, except the IO monad is enormous; this is just the partiality. Yeah, the partiality. If you have codata, then you can do that. So that's what I want. And I almost want it to be painful to use that driver part of the language, because I want people to be pushed toward the total part. I think the total part should be easy. And I want to stress: it has to be visual. You've seen demos of what this could look like. I really think this is the future of functional programming, and I think it has the capacity to make functional programming truly friendly. One of the things that I see is that people write blah blah blah blah, and then the compiler beats them over the head for something it allowed them to type. And of course it does that, because it has no control over what you put into a file. But fundamentally, it doesn't have to be that way. Whatever sort of programming environment this is, we can allow only the construction of things which are type safe, and eliminate compiler errors. All right, Vim and Emacs people, calm down. Seriously, John, have you looked at Unison? I have. Yeah, because Unison does exactly that. I'm a huge fan, obviously, and it's one of my inspirations at the end. Yes. I would argue that it's still nice to be able to write something that is incomplete. So if you only allow... Holes. No, not holes; it's like, I comment out this line, and the thing is incomplete. I thought about that a lot.
And I agree it's useful, but I think this is one of those things that comes down to: I'm used to programming with strings and characters, and I know how to incrementally solve a problem that way. Sometimes it's by commenting stuff out, and sometimes it's by writing lots of broken stuff, because that gives me the shape of the big picture. But I think I would become accustomed to using something like Unison, or the next generation, or maybe five generations away from whatever Unison will be. I think it will form its own idioms that let you do similar things: have conceptual holes in there, and keep a big-picture sketch of your ideas. It's going to be represented differently, but it will enable people to come to the language and not have to deal with all these compilation errors that happen when we let people enter arbitrary strings of text and pretend they're something more. I have to agree with that. In fact, I think that to some extent this already happens for people who use IDEs. If you're programming Java, you don't type out create, get, and so on; you pick the token from the completion list. Directly. And I think that's the point. Nominally you're entering ASCII and saying what you want to construct, but even with ASCII, if you use an IDE, you're mostly choosing completions. I never get a compiler error; I'm using IntelliJ, and it flags problems before I even compile, so I fix them while the code is still red. I would never get a compiler error.
I mean, this could create tremendous problems for somebody learning the language, or learning something in the language, because if they're never allowed to pose an incorrect construction and be told why it was wrong, then how do they learn what the right thing is? The years of experience we have are with freeform text editing; collectively, we have hundreds or thousands of years of experience with that, so we know what those pitfalls look like. I don't know what the structured-editing pitfalls look like, but this is a structured environment, and it eases people into learning incrementally. So, an editing environment, something like this, which is not what you'd expect. I apologize, I'm way over time, so I'm going to give you a fast-forward view of this, and then we can let the chat happen soon. A structured editor destroys the motivation for almost all special syntax. Look at how many languages are implemented with a lot of special syntax, because people want this, and this, and this. It exists because we edit strings of ASCII characters. A lot of those features we wouldn't even need. So I want everything to be functions, and I want these to be math-like functions. I'm tired of programming languages where we have things that are not values, that cannot be used in the domain and co-domain of functions. And of course that means we need to be dependently typed, because we need functions that can take types and values, and return types or values. I think truly reaching this goal would eliminate a lot of the incidental complexity that comes with programming, just because there are so many irregularly shaped things. Fundamentally, so many constructs in a functional programming language work around the fact that we don't have truly powerful math-like functions, in which everything in the language can be part of the domain or co-domain.
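The irregularity being complained about can be seen in today's GHC Haskell, where a "function on types" needs its own separate mechanism (a type family) instead of being an ordinary function whose domain includes types. A hedged sketch; `Element` and `firstElem` are names I made up for illustration.

```haskell
{-# LANGUAGE TypeFamilies #-}

-- A type-level "function": note it is not an ordinary function at all,
-- but a distinct construct with its own syntax and restrictions --
-- exactly the kind of irregularly shaped thing the talk objects to.
type family Element c
type instance Element [a] = a

-- A value-level function whose result type is computed by Element.
firstElem :: [a] -> Element [a]
firstElem = head

main :: IO ()
main = print (firstElem [10, 20, 30 :: Int])
```

In a language where types are first-class values of the domain and co-domain, `Element` would just be a plain function.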
I want proof search; I think proof search is the future of functional programming, and the reasons are many. Computer proofs are being used to generate massive databases of information that humans could not prove by themselves. A lot of proofs constructed by mathematicians for some theorems are now at least computer-assisted, and the amount of information necessary to prove some of these things is huge. So I want proof search. I want it to be Turing complete, because if it's not the computer trying, possibly in vain, to prove something, it's going to be me; if a given problem is in fact that hard, I would rather push that work off to the compiler. I want levels of proof: something is proven true; or there's evidence for it, i.e., QuickCheck-style evidence, but it's not proven true, so I can choose to run programs that aren't necessarily proven correct but haven't been proven incorrect; and, of course, proven false. I want proof databases to exist and to be persistent, so that when I recompile, I don't have to regenerate all the proofs I've gone through before, which is what every single dependently typed programming language I'm aware of gets wrong. And I want to actually take advantage of research, for example in machine learning, to accelerate searching through these huge graphs to reach certain locations. And zero-cost abstraction; some of that actually comes for free from kicking out recursion and using church encoding, because you can figure out what a program means, fully expand it, and reduce it again. You have that ability precisely because there's no general recursion in the church encoding. These are my inspirations: Unison, I fully approve of that project, Liquid Haskell, and Gabriel Gonzalez's cool stuff. And I don't claim to have the full answers.
I'm probably 50% wrong on all of this, at least, which means I'm maybe 50% right on the other half. So thank you very much for putting up with that.