Okay, good morning. I'm going to tell you a little bit about smart contracts. My name's Phil Wadler. I'm a professor at the University of Edinburgh and I consult very proudly for IOHK. And Lisbon is clearly a beautiful city and the right place for me to be. So I work with something called Lambda Calculus, and that's the very first programming language. It was developed by Alonzo Church in the 1930s for basically philosophical questions. They wanted to put all mathematicians out of work by having a method you could follow that would solve any mathematical question. And then it turned out that was impossible, and that's why Lambda Calculus was invented: you had to codify what it meant to solve problems in order to show that, no, there was no way of solving this problem. So that's why Lambda Calculus was around. Turns out it's the very first programming language; it beat out Turing machines by just a couple of years. Turing was Alonzo Church's student, and he wrote a paper showing that Turing machines and Lambda Calculus were equivalent, and the very first paragraph of that paper says, and this means we can use the more elegant lambda expressions. But that got lost, right? Most people do not use functional programming languages, languages based on Lambda Calculus. But Charles gave a very inspiring speech on Monday morning saying, we're going to succeed by doing things that nobody else does, like using peer review. Okay, it's really weird that with peer review, the thing that's been used for 400 years, you can go into industry and say that nobody does this, but that's exactly right. And hey, look, we can just do what everybody else does and beat everybody else at their game just by using standard techniques. That's an amazing point. And as you point out, if we succeed, other people will follow.
So that's why I'm really excited, because it's very important that the software here work correctly. For this reason IOHK has bought into using Haskell, a functional programming language, and this really has the potential to change the way we're doing things. So another thing we're doing is writing a smart contract language. What's gonna happen with that? There are lots of them out there, right? There's Ethereum's EVM, which is not functional at all. And yes, everybody's laughing, right? Because not functional in that context means you get to lose tens of millions of dollars every month or two. There are new languages coming along, like Simplicity, which really is very simple, and Michelson, which are taking ideas from the functional world. And we have our own language, oh, and thank you to Bruce Milligan for those photos. We have our own language, Plutus, which then compiles down to our equivalent of the EVM. So just as Solidity compiles to the EVM, we have Plutus, which compiles to Plutus Core. And then we're also working on a different language, IELE, which we might compile things like Solidity into, for those people that want to continue to lose tens of millions of dollars every few months. I mean, I really see that's the core constituency that IELE is supporting. I can't see what else it is for. So what does Plutus look like? How many people here know Haskell? Right, so it looks a lot like Haskell. There's a very big difference. Notice that we use one colon instead of two for has-type. And the base type is not Int, which is a 32- or 64-bit integer, but Integer, which means you use as many words as you need to represent the number, and that will turn out to be interesting later on. So here's the standard factorial program, the dullest thing you could possibly do. And then we compile that into Plutus Core by writing lots of parentheses, just like with Lisp.
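For those without the slides, the standard factorial program being described looks like this in ordinary Haskell (a sketch of the idea; Plutus itself would write the type annotation with a single colon):

```haskell
-- Factorial over unbounded Integer, as in the Plutus example.
-- Plutus writes the has-type with one colon; Haskell uses two.
factorial :: Integer -> Integer
factorial n = if n == 0 then 1 else n * factorial (n - 1)
```

Because the type is Integer rather than a fixed-width Int, large results don't overflow; they simply take more words to store, which is exactly the property that matters later in the talk.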
So it's not intended that anybody should look at this, but for debugging purposes, it's readable enough that you can actually read it and debug it. But you should never look at it unless you're debugging a compiler, basically. You should always be looking at this, not at this. And then there's some details to work out here, but this is basically it. So we just use Lisp-like syntax to make it very easy to parse. By the way, if there are any questions people should just ask. Are there any questions yet? There'll be time at the end, but especially if you don't understand something, right? It is your duty to ask a question if you don't understand. Because, yes? Plutus is a new language, correct? Why do you need a new language when there seem to be lots out there? That's a good point. So the obvious thing might be to say, let's just use Haskell. Haskell's quite a big language. It's never had a formal specification. If you actually wanted to formally validate something in Haskell, there are a large number of steps to take. So we've got a much simpler language that makes it much easier to get things right. We're not tied into Haskell, which actually has a relatively small base of people that are maintaining the core software. We also have an even smaller group of people, but at least we're in control of our small group of people. So those are the main reasons for not using an existing language like Haskell: it's quite large, and we want something smaller that we can formally specify. Oddly, even though Haskell benefits a lot from the formal methods community, and we always formally specify the core of any idea in Haskell, we've not specified all of Haskell. We did it once. It was really boring. Okay, and then here's what IELE looks like. So IELE is more or less an assembly language inspired by the EVM and even more by LLVM. So this is just an assembly language sort of thing.
For an assembly language, it's pretty readable, but it is an assembly language, and of course it's imperative. We're now assigning to locations rather than just computing values. In Plutus, by contrast, there is no notion of storage at all. You're just computing a value. And especially for something like a smart contract, that's the way to go. It's much easier to reason about something like that than about assigning to locations. If you're assigning to locations, you have to go, well, wait, what's in the location, and how does that relate to what I'm doing? In a functional language, you just depend on your arguments, which makes it completely obvious what your answer depends on. One might even say it becomes painfully obvious, but when you're doing something like a smart contract, which has to be right, then going to extra lengths to make it clear what's going on makes a lot of sense. There is a threat here, right, which is there are relatively few people in the world trained in functional programming, and we want everybody to use Cardano. So there's some open question about, well, do we make available to people only what's best for them to use, which would be Plutus, or what they're familiar with, which might be compiling Solidity to IELE. Okay, so there's IELE. And one thing to keep in mind, right, is that in computing, we always think, oh, can I make it efficient? And so people immediately look and go, oh yeah, let's do IELE. That's low-level machine language, that will be efficient. I've said this to Grigore, who's in charge of the K team that's doing IELE, Grigore Rosu, and he said, well, no, we don't care about efficiency. Which is true, because they have this formal specification language called K, and they're going to execute IELE just by using the K specification.
So that guarantees that you're doing what you specified, and it also guarantees that you'll be slower by some factor, maybe 10, maybe five, at least in the first instance, until you get very sophisticated compilers. Of course, once you have a very sophisticated compiler, you'll be faster, but you'll have a less strong guarantee that you're doing what you intended. But that should be fine. Just like using just an interpreter for Plutus should be fine. And the reason why is, if you have a smart contract, almost all your time is going to be spent executing the cryptographic primitives. And for those you could use a verified version in C. They're formally verifying all the cryptographic primitives in C, so you can use that and have high assurance. And so the cryptographic primitives will go fast, and if everything else goes slow, it shouldn't matter. Oh, good, do we have numbers to back this up? No, we do not. And as the IELE team ramps up from eight to 19 people, I've been saying, well, one of the things we really have to do is allocate some of those people to getting some numbers. Let's check that we're right, that we don't need to worry about efficiency. If the IELE team confirms, as we expect, that efficiency doesn't matter, then we can go back and say, well, wait, do we need IELE at all? Okay, so premature optimization is bad, right? We know this because Donald Knuth, my old teacher, told us this many years ago: premature optimization is the root of all evil. A quote from Donald Knuth, carefully set up on the web as a quote. And we know it because Tony Hoare, my old boss, told us exactly the same thing: premature optimization is the root of all evil in programming. And we know it because it is the core of one of the funniest comic strips in xkcd. Can you pass the salt? Pause. I said, I know, I'm developing a system to pass you arbitrary condiments. It's been 20 minutes. It'll save time in the long run. It's called IELE.
So, right, let's look at the resources we're putting in right now. So, right, all credit to Grigore, right? He's an academic like me, but he's got his own company, RV, and he's been very good at ramping things up quickly, which the Plutus team has not. So that's fantastic. So they've got eight people now, they're gonna ramp up to 19, and on Plutus we have 1.2. Darryl is the one, that's Darryl, and I'm the 0.2. So we're talking with Eileen and Gerard and so on about what we do about that, but I think it's important to address that issue. And it's most important, I think, just because having put 19 people into IELE, we're gonna go, well, we've invested a lot of resources there, that's clearly the way to go, because we put in so much resource. It would be a shame if, just because we're investing resource, that's the route we take, because it's not clear to me that that's actually the best route for achieving our goals. Okay, now, having told you how wonderful Plutus is, I'm gonna tell you how much I don't know about it. Even though we've been working on the formal spec of Plutus Core since last March, and Darryl and I have been refining it, there are still some key issues where I think there are some real questions. And I wanted to spend a fair bit of this talk talking about where there are questions, because it's you people that are gonna provide the feedback that helps us get the answers to these right. So that's why I'm exposing these three issues. So this part of the talk is gonna be a bit more technical. If you know Haskell, I hope you'll be able to follow it, except maybe for the last thing, where I get very woo-woo. But the reason I'm doing this is these are hard questions. The best way to get a hard question right is to expose it to lots of eyeballs and lots of people that can help you think about it. So that's what I'm doing here. So the first issue, as I mentioned, is we have unbounded integers. So this is a really cool idea. I first encountered it in Erlang.
So Erlang is something that was originally developed by Joe Armstrong and others at Ericsson for writing phone switches, right? A phone switch is something you want to be reliable, right? You don't want your phone call to go away halfway through. Everybody who uses Skype knows that. And Erlang has unbounded integers. By the way, it's also designed to be highly concurrent and to use message passing, which is a very good idea for building concurrent systems, much better than the shared memory that's often used. And of course, message passing works for distributed systems, which a phone switch often is. So it has unbounded integers, and that means, say you write a really successful phone switch and it's up for a long time. It's up for so long that you've got some counter that overflows the word size. Oops, that crashes your phone switch. No, not in Erlang, because the integers are unbounded. You can't overflow the word size. It might get slightly slower as you go from one-word to two-word integers, but it will not crash. So that's great. So previously you'd have an overflow and a crash, and now it works fine. So what a brilliant design! Let's do that! Okay, but there's a problem. If you have a smart contract, one of the things you need to do is charge a cost called gas. How many people are familiar with gas from Ethereum and the EVM? Right, quite a few of you. So Charles didn't raise his hand for some reason. He's not participating. So you have this concept called gas, and that says, we'll charge for each operation. And the idea is, if you didn't do that, somebody could send you a transaction which does lots of very expensive computation, and all of your time will be used up doing that very expensive computation. And this could be a kind of denial-of-service attack.
You're a mean miner, and the mean miner sends out all these very expensive transactions to everybody else, and the mean miner ignores them and therefore gets the blocks out faster. So you need some way of saying, no, we're going to actually charge you for the computation involved. Because if you don't do that, then there are various kinds of attacks you'd be open to. So we had unbounded integers for a long time, and then one day in the shower, taking showers is really important to doing good research, right? One day in the shower I go, wait a minute. Because our model was just like everybody else's, right? You have a certain constant cost for an addition, a certain constant cost for a multiplication. Wait, if somebody used really, really big integers, they could make multiplication and addition really expensive, and we'd get in trouble. So your cost ought to reflect reality: your cost must be proportional to the actual work you're doing. So that means if we have unbounded integers, for addition, we have to charge a cost proportional to the larger of the logarithms of the two numbers that we're adding. For multiplication, it must be proportional to the sum of the logarithms of the two numbers we're multiplying. The logarithms are there because it takes, what is it, 20 bits to represent a million, and 30 bits to represent a billion. So the log of the size of the number is the number of bits that you need, log base two, in fact. So we need to do something about this. So okay, well, let's just change the cost, fine? We will allocate cost proportional to the logarithm of the size of the numbers we're acting on. And let's see, it's actually a pretty important problem to work out how much gas you need. For one thing, there's real monetary cost connected to the gas, and for another thing, you say in advance how much you're willing to pay, so there's a bound on the cost of what you're doing.
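Concretely, a size-proportional cost model might look something like the following sketch (the function names and the absence of constant factors are illustrative assumptions; the talk does not give the actual Plutus cost functions):

```haskell
-- Number of bits needed to write n, i.e. roughly log base 2 of n.
-- A million needs 20 bits; a billion needs 30.
bits :: Integer -> Integer
bits n = go (abs n)
  where go k = if k < 2 then 1 else 1 + go (k `div` 2)

-- Hypothetical gas charges proportional to the actual work:
addCost, mulCost :: Integer -> Integer -> Integer
addCost x y = max (bits x) (bits y)  -- addition: the larger of the logs
mulCost x y = bits x + bits y        -- multiplication: the sum of the logs
```

So adding a million to a billion would be charged like a 30-bit operation, while multiplying them would be charged like a 50-bit one, rather than a flat constant.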
Good, that means if you accidentally put a non-terminating loop in your program, it doesn't run forever, and you don't get charged infinitely much. So having a bound on gas cost is important. It would be nice to calculate such things. As I learned after joining IOHK, because IOHK is so good at nosing out the relevant academic work, there's a body of work on something called RAML, Resource Aware ML. I actually knew of this line of work because a very important strand of it started at the University of Edinburgh and was carried on by Martin Hofmann, and then Jan Hoffmann, who I think is no relation, carried it on further and built this great tool, Resource Aware ML. And Resource Aware ML actually is really good. It can compute a good cost bound for your program, but as we'll see, it struggles a bit if we use unbounded integers, integers that might occupy more than one word. Because when I said, oh, how are we gonna model the cost, the answer was, let's just use RAML, and in RAML you could model an integer by a list of bounded integers, and then the cost would be proportional to the length of that list. RAML can do that. I said, okay, that might work. Let's try it. So here's RAML with one-word integers. I've just written a little program. It's just like factorial, but I'm adding the numbers up rather than multiplying them. So it just takes the list of numbers from one to n and adds them together. And because RAML's a little weird, I had to represent the count as a natural number represented with this type here, NAT. So if you're a Haskell programmer that will look familiar as an insanely bad way of representing the natural numbers, well, insanely expensive, because it's linear in size, and if you're not, don't worry about it. But this is basically the standard adding up of the list of numbers from one to n, and that should take time proportional to the number you give it, and that indeed is what RAML says.
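In Haskell terms the little program looks something like this (a sketch of the idea; the actual example was written in RAML's ML syntax, and the helper names here are illustrative):

```haskell
-- The "insanely expensive" unary naturals: linear in size.
data Nat = Zero | Succ Nat

fromNat :: Nat -> Integer
fromNat Zero     = 0
fromNat (Succ m) = 1 + fromNat m

toNat :: Integer -> Nat
toNat n = if n <= 0 then Zero else Succ (toNat (n - 1))

-- Just like factorial, but adding instead of multiplying:
-- the sum of the numbers from one to n.
sumTo :: Nat -> Integer
sumTo Zero     = 0
sumTo (Succ m) = fromNat (Succ m) + sumTo m
```

Counting one step per Succ, this runs in time proportional to n when addition is constant-cost, which matches the linear bound RAML reports.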
So it says it takes nine steps always, plus 26 steps times m, where m is the number of S nodes of the argument. In other words, how many S's we have in writing down this number, which is just the size of the number. So that makes sense. Adding up the numbers from one to n takes time proportional to n. If addition is constant, that's right. Okay, what if it's not constant? What should it be then? Well, each step would be log of the size. The biggest number is n. So n times log n is our bound. Okay, so let's do what I said about coding up addition. So there's addition all coded up for you where it says dot, dot, dot. I've omitted it. It's about three or four pages of RAML code that Jan kindly wrote for us. And you can do this, and it goes through. It says, oh yeah, its cost is 21 plus 125.33 times m plus 80 m squared plus 26.67 m cubed, which is a lot bigger than m times log m. So it's way overestimating here. Okay, we're getting really bad estimates here, but at least it can estimate it. So then I said, okay, let's try factorial. Well, we multiply instead of add. And whoops, then it said: oops, a bound for factorial could not be derived. The linear program is infeasible. Which is technical speak for, oh shit. So it very quickly fails, even on something as simple as factorial. So what does that mean? Where does that leave us? So say you deploy a successful smart contract that runs for a long time and the counter passes the word size. So previously we'd get an overflow exception, but now we get out-of-gas. It still fails. So what we've done is we've made it really hard to analyze the programs, but they're still going to fail. Is this a good trade-off? That's not entirely clear. So there's something really cool about saying, we can do it, let's use unbounded integers, but we're actually paying a very high cost for it. So it's not clear if this is the right choice.
And so that's why I wanted to expose it to this audience, because you're very good people to get feedback from on whether we've got this right. It'd be very easy for us to change. And of course it's only when putting together the slides I go, wait a minute, why did we choose this again? This really seems a bad choice in some ways. Oh, and of course the real nightmare I have is that there's going to be something really weird with gas that will be completely obvious once it happens and completely unanticipated before, just like the re-entrancy bug for the DAO, and that will really cause our clients some problems. So that's my real nightmare, that we think we're doing something cool and it's going to cause an unexpected problem. I've been trying to figure out what that unexpected problem is. Maybe there isn't one. As of today it remains the unexpected problem. And that's really why it's my nightmare: thinking about what we might be missing that could cause us trouble. Okay, second point: abstract data types. So every programming language should have abstract data types. An abstract data type lets you say, here's the name of the type, here are the operations on it, but I'm not telling you how it's implemented. This is really important in computing. First of all, to avoid premature optimization, because you can use the simplest, stupidest, easiest, most obviously correct implementation and plunk that in your program. Because otherwise, later you go, premature optimization is the root of all evil, but now it's time to optimize, and you can't. So you want to structure your program in such a way that you can change the representation and it will still do exactly the same thing. That's what abstract data types are for. They are a brilliant idea, only really developed over the last 30 or 40 years, long after stored-program computers first came along. It took us a while to think, oh, we need this. The basic maths of it was only formalized in the 1970s, twice.
Once by the logician Jean-Yves Girard, who wanted to prove things about second-order logic, and once by the computer scientist John Reynolds, who just said, oh, abstract data types are really important. And at the same time, work was going on by Barbara Liskov and many others, and Barbara Liskov actually got the Turing Award for it. John Reynolds did not, a great injustice. And he won't, because one of the requirements for winning the Turing Award is you have to breathe. So anyhow, abstract data types: really important idea. Did Plutus Core have it? Did Plutus have it? No, because Plutus originally didn't have modules, and Plutus was like Haskell, and in Haskell, abstract data types are tied to modules. The representation is hidden inside the module. Kind of makes sense; that's what modules are for. So the first thing I did when I joined the team is I said, let's add modules so we get abstract data types. And of course we did it like you do it in Haskell. So the way you do it in Haskell is you say, here's my module, it defines a stack, and stack has five operations. You can create an empty stack, here it is. You can check whether a stack is empty. So we've decided, hidden inside, that we will represent a stack as a list of integers. So we can check if it's empty: is that list empty? You can push something onto the stack, by consing it on; colon is the cons operator in Haskell. You can pop something off the stack: you drop the thing on top and return the rest, that's the new stack. And then top is the element on the top of the stack. See, the code is so simple that, having written it several years ago, I can read it out and explain it to you. Now notice what we do is we introduce a new constructor, MakeStack, for these things. And notice we don't export this constructor, MakeStack. And that's the key: since you haven't exported the constructor, nobody can create a stack except by using these five operations.
And since constructors are also used with pattern matching to deconstruct things, to take them apart, nobody else can take it apart and look inside. So you have complete control over the operations on stack, which will be just these five. Stack, of course, is a really dull example, but it's traditional that you always use exactly this particular really dull example for abstract data types. This is tradition. Tradition, okay. And so notice that this is fine. All we need to do is write MakeStack everywhere: everywhere we're taking something apart, we write MakeStack, and everywhere we're putting a new stack together, we write MakeStack. And by doing that, we've explained which things should be viewed as having type stack and which things should be viewed as having type list of integer. So this is how we do it in Haskell. Why did we do it in Haskell in this way? Okay, I can now tell you the answer, right? Because I was one of the developers, I was there, I was one of the people who pushed for this. Why did I push for this? The answer is, I was stupid. We already knew of a better way of doing it, which I will show you in a moment, but we didn't know if that better way would work with something in Haskell called type classes. Many years later, I did a project with a PhD student, Jeremy Yallop, which worked out that the way Miranda does it, which I now think is better, and this way of doing it are equivalent. You can convert between the two. So we could have specified it the other way, and if we needed to, implemented it this way. But in fact, I believe we could just do type classes much more straightforwardly without doing this at all. But at the time, we didn't know that. So we were conservative, and we took the other design, and of course, that's become very popular. But what we could have done is done it the way it's done in Miranda, which is like this: you use the types to keep things straight. So we say empty has type stack. IsEmpty takes a stack to a bool.
Push takes a number and a stack and returns a new stack with that number on the top. Pop takes a stack, removes the top element, and returns a new stack. And top takes the number off the top of a non-empty stack. So these two both fail if you give them an empty stack. And then we just specify: we say stack is a list of numbers. Those brackets mean list of numbers. Empty is the empty list. IsEmpty just checks whether it's an empty list, which is the null function in Miranda and in Haskell. Push just conses something on. Pop removes the top element, and top returns the top element. And these declarations tell you which things are treated as type stack and which as type list. In this case, they're all treated as type stack, but we could have treated one as type list if we'd wanted to and just indicated it here. So even within the last year, I've done research, published in a festschrift for David Turner, the creator of Miranda, explaining how you can make this work even without a type system. You can just do dynamic checking to ensure that this works out. So we now have a pretty thorough explanation of how to do this. And if you just look at it, this is a lot easier to read than this, even for the simplest possible example. And when you're doing things that are not the simplest possible example, then the advantage of doing it this way instead of that way is much more clear. So what should we do? Right now we're doing it like this, which is more familiar to our potential user base, because lots of people know Haskell, but I think we should do it like this. I think we should do something that's easier to read and write and also exploits the foundations that we're using in Plutus Core.
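For readers without the slides, here is a sketch of the Haskell-module style being described (the names are illustrative, not the actual Plutus code):

```haskell
-- In a real module you would write:
--   module Stack (Stack, empty, isEmpty, push, pop, top) where
-- exporting the type Stack but deliberately NOT the constructor
-- MakeStack, so nobody outside can build or inspect the representation.

newtype Stack = MakeStack [Integer]  -- hidden: a stack is a list of integers

empty :: Stack
empty = MakeStack []

isEmpty :: Stack -> Bool
isEmpty (MakeStack xs) = null xs

push :: Integer -> Stack -> Stack
push x (MakeStack xs) = MakeStack (x : xs)  -- colon is cons

pop :: Stack -> Stack                -- fails on an empty stack
pop (MakeStack (_ : xs)) = MakeStack xs

top :: Stack -> Integer              -- fails on an empty stack
top (MakeStack (x : _)) = x
```

The Miranda abstype style instead states the five signatures up front and separately equates Stack with the list of numbers; as the talk says, the two styles are interconvertible, but the signature-first form is easier to read.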
So Plutus Core uses a variant of the Lambda Calculus, the one that I mentioned, due to Jean-Yves Girard and John Reynolds, called System F, or the second-order Lambda Calculus, or the polymorphic Lambda Calculus, if you want a big word. Polymorphic is just the word we use for the kinds of type systems you find in languages like Haskell. I was once in Greece, on vacation, walking down the road past all these vendors, and there was a vendor who had a sign: polymorphic earrings. I thought, ah, this is my thing. They just meant they had lots of shapes. So every time I think of polymorphism, I think of polymorphic earrings. Right, so again, a choice. Do we do what's more familiar to our user base? And for instance, Duncan has said that's kind of important. Or should we do something that's easier to read and write and closer to the mathematical foundations that we're using? So I'm beginning to lean in this direction, but it's really important to get feedback from this group. So I said I'd show you three issues; that's two issues. For the third issue, okay, I'm going to talk a little bit about data constructors. This is more technical. I'm going to rely on a little bit of knowledge about heavy mathematics. So every talk should have something in it that about three people in the audience will follow. This is that bit. So if you get a little bit lost here, that's okay. And if you think that I'm doing something that looks kind of insane, that's okay too, because that's what I'm doing. So here's the problem. The first part of this I hope people will follow. So what is our model for a smart contract in the settlement layer? So our model is much less powerful than what you have in something like Ethereum. And one of the things the smart contracts group needs to sort out is the scope of the power of our smart contracts. And is it right that the settlement layer be more restrictive than the full power of Ethereum?
But that's where we're at right now, and that might well be a good choice. So what you do is, when you submit a transaction, you give a validator, which is a program of type A to Comp B, for some A and some B. So A and B are arbitrary types. Comp in Plutus is, for those of you who know the term, a monad, but what Comp means is that something of type Comp B, well, it could return a value of type B, but it can do some other things. It can fail, it can say, nope, that's no good. And it can also examine the world. So a value of type Comp B, one of the things it can do is say, wait, tell me the hash of the current block, and then I'll give you the answer. So it can inspect the real world. Okay, and then the redeemer provides an A, so its type is Comp A. And there's a standard way of sticking these two things together to get a value of type Comp B. So one person provides the validator and a different person then provides the redeemer. So the validator might be something like multi-sig, so A says, give me a bunch of signatures, and then it will either fail or return okay, depending on whether or not things are properly signed, and then the redeemer is responsible for providing the signatures. Is that clear? Yeah? Ah, good question, right: could Comp change the world, I presume is what you're asking. You can have a Comp that changes the world, but we don't. We only read the world; the only change would come as a result of doing the transaction. Good question. Do people then follow this model? Okay, so we can pick A and B to be anything we want. Let's say the validator wants to create a new abstract data type for use by the redeemer. Can we do that?
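The composition being described can be sketched like this (here Comp is stood in for by Maybe, which captures only the can-fail part, not the inspect-the-world part; the names are illustrative, not the real Plutus API):

```haskell
-- Stand-in for Plutus's Comp: a computation that can fail.
-- The real Comp can also inspect the chain, e.g. ask for the
-- current block hash, which Maybe cannot model.
type Comp = Maybe

-- The standard way of sticking validator and redeemer together:
-- run the redeemer to get an A, then feed it to the validator.
runScript :: (a -> Comp b) -> Comp a -> Comp b
runScript validator redeemer = redeemer >>= validator

-- A toy multi-sig-flavoured validator: succeed only if there
-- are at least two signatures, otherwise fail.
multiSig :: [String] -> Comp ()
multiSig sigs = if length sigs >= 2 then Just () else Nothing
```

So `runScript multiSig (Just ["alice", "bob"])` succeeds, while a redeemer supplying only one signature makes the whole thing fail.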
Yeah. So what we do is, we let A, so I've expanded A out to this type, so this is what the redeemer is gonna provide. The redeemer will give the validator one of these, and then to use it, the validator must give it an X, and give it an A of X, and then that will return a B of X, and we'll get a Comp C. Let's see how to use this to provide a stack. So the validator wants to declare an abstract type of stack, and the redeemer wants to use it. How would we do that? Okay, so the validator says, for all X, X is now gonna be stack. I'm gonna give you a stack, that's the empty stack, here it is, right? So I'm gonna give you an implementation of the type stack. So here stack is just an arbitrary name, we don't know what it is. Here stack is the abstract data type that we declared before, so I probably should have called this one stack-placeholder or something like that, but that would have used up more words on the screen. So we'll give it the implementation of empty, the implementation of isEmpty, the implementation of push, the implementation of pop, the implementation of top, and then, given all these things, it's going to return something of type Comp B of X. So B might return a stack, or something containing a stack, and then we'll look at all that, and we'll finally get our final answer, Comp of C, and we'll be done. No, C shouldn't refer to stack, because you wouldn't know what stack is, you couldn't do anything with it; stack could be anything. But in here we then give it stack and empty and isEmpty and push and pop and so on, and then we do stuff to actually get the answer. So this answer will have type Comp of B of X, and then we can do stuff here to get a Comp of C as our final answer. Is that clear? That's the hardest thing I expect you to follow. Now I'm gonna get insane. So this is great. We've got full abstraction.
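A sketch of this idea in Haskell, using a rank-2 type for the "for all X" (the real encoding lives in System F and the names here are illustrative):

```haskell
{-# LANGUAGE RankNTypes #-}

-- The redeemer is a client that must work for ANY stack type,
-- given only the five operations. Because the stack type is
-- universally quantified, the client cannot inspect the
-- representation: it only gets what the validator hands over.
type StackClient r =
  forall stack.
       stack                         -- empty
    -> (stack -> Bool)               -- isEmpty
    -> (Integer -> stack -> stack)   -- push
    -> (stack -> stack)              -- pop
    -> (stack -> Integer)            -- top
    -> r

-- The validator picks a concrete representation (here, a list of
-- integers) and instantiates the client with its five operations.
runWithListStack :: StackClient r -> r
runWithListStack client = client [] null (:) tail head
```

For example, `runWithListStack (\e _ pu _ tp -> tp (pu 3 (pu 2 e)))` pushes 2 and then 3 and reads the top, without the client ever learning that a stack is a list.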
If we want to, right, we could provide other things on top of this to make it easier for people to use, so you don't actually have to put, whoops, all these arguments in here. We could compile to this. But this is great. You could do anything you want this way. Well, so I was very pleased when we worked this out, and then Daryl said, but what about constructors? And I responded with a very technical phrase. I said, oh shit. Because we can do these declarations, right? We saw this type before. The Nat type is either zero or successor of a Nat. This is how they defined natural numbers back in the 1800s, Dedekind and Peano and others. And it's pretty cool, right? How many of you spent years learning your plus tables? You can write down the complete definition of plus in two lines. What does it mean to add zero to something? Well, you just get the something. What does it mean to add one plus M to N? Well, it means recursively add M and N and then add one to the result. I'm writing a textbook right now where I use Agda to do things like write down this definition and then prove the normal things, like that it's commutative and associative, and the other properties you want for addition. But these things are different from functions. Lambda calculus is all about functions. These are something else. These are called inductive data type constructors, and you can use them in patterns, like here. We can say, well, look at my first argument. Is it zero? Is it a successor? This is deconstructing the argument, looking to see what it's made of. Normally we would do that with accessor functions, but here we're doing it with patterns. So we need a whole new model. This is a different way of spelling oh shit. What are we going to do? So the standard solution is you make your programming language a lot more complicated.
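For reference, the two-line definition of plus over Peano naturals looks like this in Haskell (the `toInt` and `fromInt` helpers are added just for testing, they are not part of the definition):

```haskell
-- Natural numbers the way Dedekind and Peano defined them:
-- either zero, or the successor of a natural number.
data Nat = Zero | Succ Nat

-- The complete definition of plus, in two lines, by pattern matching.
plus :: Nat -> Nat -> Nat
plus Zero     n = n
plus (Succ m) n = Succ (plus m n)

-- Helpers for testing only.
toInt :: Nat -> Int
toInt Zero     = 0
toInt (Succ n) = 1 + toInt n

fromInt :: Int -> Nat
fromInt 0 = Zero
fromInt k = Succ (fromInt (k - 1))
```

So `plus (fromInt 2) (fromInt 3)` evaluates, step by recursive step, to the representation of 5.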
So Plutus Core has data constructors in it, and so we could use this model, but if you were using data constructors then it would be hard for the validator to declare new data constructors and pass those on to the redeemer. So you could pass on an abstract data type, but you couldn't pass on a concrete data type, where moreover you knew the representation. Seems unpleasant. So how can we get around it? Well, it turns out when Alonzo Church created the lambda calculus, here was the big question. It just has functions, so how do you represent numbers? And he could represent numbers with something called Church numerals, which we'll see here: the Church encoding. So how are we going to do it? Now Nat is going to be just a type. He didn't have types; those came along later, in the 1970s, with Girard and Reynolds. So we've got two functions: zero, which is a natural number, and successor, which, given a natural number, returns a new natural number. And then just one other thing, n case, which, okay, is this right? Yeah, this is just the definition of a case expression. You give it a natural number, and then your case expression has two branches: one, what do I do if it's zero, and one which says what do I do if it's not zero. And we've actually seen an example of this on the previous page, right? This is a case expression in disguise. It says: look at the first argument. If it's zero, return the function that accepts n and just returns it. If it's the successor of m, then, sorry. We accept m and n as arguments. We do a case expression. If it's zero, what do we return? N. If it's the successor of m, what do we do? We bind m, so we now know the thing we are one greater than, and then we just add m and n and return the successor of that. So these are the two branches of the case.
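That "case expression in disguise" can be sketched in Haskell: if we package an explicit n-case function over the ordinary Nat data type, plus becomes exactly the two branches just described. (`nCase` is my name for it, an assumption about what is on the slide.)

```haskell
data Nat = Zero | Succ Nat

-- A case expression packaged as a function: one branch for zero,
-- one branch that receives the predecessor.
nCase :: Nat -> x -> (Nat -> x) -> x
nCase Zero     z _s = z
nCase (Succ m) _z s = s m

-- plus written with nCase instead of pattern matching:
-- if m is zero, return n; if m is successor of m', recurse and add one.
plus :: Nat -> Nat -> Nat
plus m n = nCase m n (\m' -> Succ (plus m' n))

-- Helper for testing only.
toInt :: Nat -> Int
toInt n = nCase n 0 (\m -> 1 + toInt m)
```

This behaves identically to the pattern-matching version; the case has just become an ordinary function we could, in principle, pass around.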
And in general, our case here returns a Nat, but your case might return an arbitrary type. So case has to work with an arbitrary type; let's call that arbitrary type x. If it's zero, this is the answer. If it's not zero, we do something weird: we say, okay, recursively compute the answer for that value using this case. So this is now what's called a fold function. I'm getting more technical. Recursively compute the answer, do whatever this says to do to give you back an x, and then for your final answer, either it's zero or it's the successor; either way, we'll get back an x, and we return that. So now how do you do plus? You do an n case on m. If it's zero, just return n. If it's non-zero, recursively invoke plus, that's what this does, and then take the successor of the answer. And then how do you define these? Now they're all just functions, right? So zero takes a type x and a value z of type x and a value s of type x to x, and it returns an x. And so zero of x and z and s returns, that's a typo, that's z, which has type x. And successor of x and z and s and n returns s of n applied recursively to all the same things. And n case, oh, we've just defined these things to be their n cases. So here's an example of using this type to convert something. The conversion does nothing at all except change the type, which is very important. So we're representing numbers as their case statements, but a funny case statement that has recursion built in. Brief history of computing: Church knew all this. He said, look, we can represent numbers, we can do addition, we can do multiplication the same way. It's great. And so he said, how do we do minus one? And he said, oh. And they thought about it really hard and they couldn't come up with an answer. Eventually, Church's student, Stephen Kleene, went to the dentist. The dentist put him under with laughing gas, and while he was under with laughing gas, Kleene figured out how to do minus one this way.
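Here is a Haskell sketch of the Church encoding just described (assuming `RankNTypes`; the names are mine). A number is represented by its own fold: the "case" hands the successor branch the recursively computed answer, not the predecessor itself.

```haskell
{-# LANGUAGE RankNTypes #-}

-- Church numerals: a number is its own fold, with recursion built in.
newtype CNat = CNat (forall x. x -> (x -> x) -> x)

-- zero of z and s just returns z.
czero :: CNat
czero = CNat (\z _s -> z)

-- successor of n, given z and s, returns s of n applied to the same things.
csucc :: CNat -> CNat
csucc (CNat n) = CNat (\z s -> s (n z s))

-- The "n case" is really a fold: the second branch receives the
-- recursively computed answer for the predecessor.
nFold :: CNat -> x -> (x -> x) -> x
nFold (CNat n) = n

-- plus: fold over m, starting from n, taking a successor at each step.
cplus :: CNat -> CNat -> CNat
cplus m n = nFold m n csucc

-- Helper for testing only.
toInt :: CNat -> Int
toInt n = nFold n 0 (1 +)
```

Note there are no data constructors anywhere: `czero` and `csucc` are plain functions, which is the whole point.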
It turns out it's really hard. It's also really expensive: finding the predecessor of the number n takes time proportional to n. You'd think it'd be constant, right? It's a successor; deconstruct it, look inside; it should be constant time. No, it's time proportional to n. So it's not efficient, but it works. And this is how Church proved that you could not put all mathematicians out of work, that there were always problems that could not be solved by a computer. The particular problem involved, of course, was the halting problem. So you can do it all this way, we don't need constructors at all, but it's inefficient. Bad idea. Uh-oh. Is there anything we can do about that? Yeah, it turns out there's something else, called the Scott encoding. And I was reminded of this because last September I went to the International Conference on Functional Programming. They always have a workshop called FARM, which is about functional art, so that's the A and the R, and music: functional art and music, or FARM. And they always do a concert. And one guy at the concert did, I kid you not, live coding with something that turns lambda expressions into music and visual displays of the trees involved. And he coded up the Church numerals, he coded up the Scott numerals, these, and he proved they were equivalent. Well, he tested that they were equivalent, showed they were equivalent for the first ten numbers. So there's this art performance that reminded me that these things exist. So Dana Scott came up with a different encoding in the 1970s. And now, oh look, n case, I hate using that pointer, I'll use this pointer. N case looks like what we'd expect. It takes a natural number, and for all x: the zero case is an x; the other case is you deconstruct it, so you get the previous natural number and then return an x; and then the whole thing returns an x. So that would be the type. Zero, given z and s, just returns z. Successor of n, given z and s, applies s to n.
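Here is the Scott encoding sketched in Haskell (names mine). Note that it needs a recursive type, which `newtype` supplies, and that the case now hands you the predecessor directly, so predecessor really is constant time.

```haskell
{-# LANGUAGE RankNTypes #-}

-- Scott numerals: the successor branch receives the predecessor itself.
-- The recursive occurrence of SNat in its own definition is essential.
newtype SNat = SNat (forall x. x -> (SNat -> x) -> x)

szero :: SNat
szero = SNat (\z _s -> z)          -- zero just returns the zero branch

ssucc :: SNat -> SNat
ssucc n = SNat (\_z s -> s n)      -- successor applies s to n

-- n case looks like what we'd expect: just unwrap the number.
sCase :: SNat -> x -> (SNat -> x) -> x
sCase (SNat n) = n

-- Constant-time predecessor (with predecessor of zero being zero).
spred :: SNat -> SNat
spred n = sCase n szero id

-- plus: case on m; zero gives n, successor recurses on the predecessor.
splus :: SNat -> SNat -> SNat
splus m n = sCase m n (\m' -> ssucc (splus m' n))

-- Helper for testing only.
toInt :: SNat -> Int
toInt n = sCase n 0 (\m -> 1 + toInt m)
```

Unlike the Church encoding, `spred` does no recursion at all: it simply asks the case for the number underneath.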
So now we've got constant-time predecessor: n case just returns it. And now what does plus look like? Oh, it looks kind of like what we'd want, right? You do a case on m: if it's zero, return n; if it's not, bind m to the predecessor, recursively call plus, and then take the successor. So all we need is recursion built into our types: recursion as an operator on types, and recursion as an operator on functions, so we can define recursive functions and recursive types, which we'd kind of like to do anyway. So with that, I give you a new proposal for Plutus Core, the insane proposal for Plutus Core, which is very tiny, because it has no modules, because we can do them with what we've got. It has no data constructors or deconstructors, because we can do them with what we've got. Is it efficient enough? Is doing it this way a lot less efficient than using constructors? We have to measure that. That's to be done; Daryl is doing it right now, I hope, at this moment. But here it is, right? Your complete definition. You've got kinds, which are the types of types. You've got types, which are just type variables, functions, universal quantifiers, recursion, or built-in primitive types like integer. And then terms: you just have variables, lambda expressions, lambda application, big lambdas, that's abstracting over a type, applying to a type, recursion, blah, blah, blah, recursive definitions, and primitive values like plus or three. Sorry? Napkin Plutus, good name. In fact, I think calling it Plutus Core is a bad idea. We keep saying Plutus Core; we need one name for it. Maybe we should call it Napkin. I like that. That's a better name even than Simplicity. I'm done. Do you have opinions about programming languages? If so, we need your help, right? I just want to leave you with the conclusion I always leave people with: when you have a tough job to do, what you should think is, this is a job for lambda calculus.