This talk is going back to really simple freshman calculus. I understand not everybody in computing took freshman calculus, but I'll assume you didn't come to this talk without at least a vague idea of what a power series is. The point of the talk is how little it takes in Haskell to get a very long way into mathematics, and some of the reasons are on the slide here. Lazy evaluation lets you deal with infinite structures. It turns out that the head-tail notion of a list is just beautiful for some mathematical constructs, better than subscripts, and it results in Haskell that looks like mathematics.

A power series is just a sequence of coefficients. We're totally formal here; we're not interested in issues like convergence. And if the power series happens to end, we call it a polynomial. Here on the screen are a few useful polynomials: 0 and 1. There are two ways to represent zero; in fact, there are an infinite number of ways to represent zero. The simple monomial x is [0, 1]. And cosine: we all know the cosine series, and it just turns into a few fractions.

The simplest operations are addition and negation, which belong to class Num in Haskell. The red things are all that you need to read on the slide. To do addition, you add the heads of the sequences and recur on the tails. Negation is exactly the same. And in Haskell, subtraction comes for free; it's defined in terms of addition in the way shown in the upper half of the slide.

The numeric class, Num, has a few standard functions attached to it: add, subtract, multiply, negate. Here is a piece of the definition from the standard prelude. Negate is a function from some numeric type to that same type; addition, multiplication, and subtraction similarly. And in red I say "same type". This is critical: you can only add two things of the same kind, and for multiplication that makes a difference, because it means you can't multiply a power series by a scalar.

There we see a little Haskell showing on top of the mathematics. The code at the bottom is the code that was on the previous slide for negation and addition, only now it has been put in the context of an instance declaration that says lists are numeric. With lists being numeric, we're off to the races. We could have written a couple of those definitions in terms of standard Haskell functions: negation is simply a map of the negate function over the whole list. It's a temptation to say addition is zipping the two lists together, adding term-wise. What's wrong with that? The two lists may be of different lengths. Exactly: zipping only goes to the shorter one, so we had to spell out a different definition of addition rather than use zipWith.
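A minimal sketch of how that instance might look, with a power series represented as an ordinary lazy Haskell list of coefficients, constant term first. The negation and addition clauses are the ones just described; the multiplication clause and fromInteger anticipate material that comes a little later in the talk, and abs and signum are stubbed out because they play no role here.

    -- A power series is a lazy list of coefficients, constant term first.
    instance Num a => Num [a] where
      negate fs       = map negate fs             -- negate every coefficient
      (f:fs) + (g:gs) = f + g : fs + gs           -- add heads, recur on tails
      fs     + []     = fs                        -- unlike zipWith, keep the
      []     + gs     = gs                        --   longer operand's tail
      (f:fs) * (g:gs) = f*g : [f]*gs + fs*(g:gs)  -- the recurrence derived below
      _      * _      = []                        -- once either operand runs out, the product is zero
      fromInteger c   = [fromInteger c]           -- constants become one-term series
      abs             = error "abs has no sensible meaning for power series"
      signum          = error "signum has no sensible meaning for power series"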
Now it gets more interesting: multiplication. Think of a power series as in the upper red line: a series F is its first term, lowercase f, plus x times its tail, the tail written with a bar over it. With that unfolding, we can work out what F times G is. There are three lines of derivation. The second line simply unfolds F and G. The third line notes that the last two terms of the second line can be factored: factor x times F-bar out of them. Now we know how to do the head of the product: it is f times g. (I can't point at my screen, so you won't see it.) We have to promote the constant term to a power series by putting it in brackets, and multiply that by a power series. And finally we have the recurrence on the tail, multiplying the two tail series together. Multiplying by x is simply moving everything one place down the sequence, so it is the cons that performs the multiplication by x. We get integer powers for free, because they're defined in Haskell in terms of the product. The slide shows this formula for multiplying two lists next to how it appears in a freshman calculus book; ask yourself which one looks easier. The derivation on the previous slide was far easier than the derivation by exchanging orders of summation.

Division is a little nastier. If you do the same kind of derivation, it's a two-step process to come up with the red formula, though I'm leaving a whole lot of decoration off the screen: there is more than one way in which you have to defend against dividing by zero. We have, by the way, discovered long division in a very simple derivation. Long division is: divide the two head terms (although we're writing polynomials with the lowest-degree term first rather than the highest, but that's okay), multiply that term of the quotient by the divisor, and subtract from the dividend. That's exactly what the formula says, and it's familiar old long division. The biggest flaw is that if you divide two polynomials and the result is exact, you get an infinite trailing sequence of zeros. It's not too difficult to decorate this up with a test to see whether that's happening and stop, but this is the basic idea.

Functional composition, f of g of x. The unfolding procedure again goes in two steps. But notice, in the last line of the derivation, we have the head of f added to the head of g; and if we continue to unroll the tail of f, we will get all the heads of g added in there. That is an infinite sum just to compute the head of the result, which is not so good. So we restrict g to have no constant term. Translating that formula, we get the one at the bottom, with # taken as the composition operator.

Reversion. This is the functional inverse. There's a great literature about reversion; many people have written papers on algorithms for it, and Knuth's treatise refers to several of them. His best treatment of this algorithm is about half a page of pseudocode. If you do the same kind of development, you come up with this extremely simple answer, which is perhaps implicit in the formulations in those papers but never made explicit. This is the best single line of code I ever wrote. But notice the feedback: the result, rs, is fed back into the formula. Why does the feedback work? Because the result leads with a zero, so you always have one term to work on; that produces the next term, and the feedback goes through. It takes delay to do feedback, and the leading zero gives us that delay.

Now a little bit of Haskellism. In Haskell, you know that numeric constants are polymorphic; in a formula they take on whatever type makes the type of the expression work out. How is it done? Class Num has a fromInteger operation, and you have to supply it for each type you'd like constants converted to; here we want them converted to lists. This says that to convert an integer constant c into a power series, you first convert it into whatever the underlying coefficient type of the power series is, and then you put it in a one-element list.
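Building on the list instance sketched earlier, the division formula might be transcribed as a Fractional instance roughly like this. As in the talk, most of the defenses against zero divides are left off; the recurrence is exactly long division with the lowest-degree terms first.

    -- Divide the heads, then subtract the quotient term times the divisor's
    -- tail from the dividend's tail and keep dividing: long division.
    instance Fractional a => Fractional [a] where
      fromRational c  = [fromRational c]
      []     / (g:gs) = []                        -- a zero (empty) dividend
      (f:fs) / (g:gs) = let q = f/g
                        in  q : (fs - [q]*gs) / (g:gs)
      _      / _      = error "power series division by zero"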
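And the composition and reversion one-liners over the same representation, again as a sketch. The (0:gs) pattern enforces the restriction that the inner series have no constant term, and the leading zero in revert is the delay that lets the feedback go through; # is the composition operator, as on the slide.

    infixr 9 #

    -- Composition f(g(x)), for g with zero constant term:
    -- the head of the result is the head of f, and the cons supplies
    -- the factor of x pulled out of g.
    (#) :: (Eq a, Num a) => [a] -> [a] -> [a]
    (f:fs) # ggs@(0:gs) = f : gs * (fs # ggs)
    []     # _          = []

    -- Reversion (functional inverse): the result rs is fed back into itself.
    revert :: (Eq a, Fractional a) => [a] -> [a]
    revert (0:fs) = rs  where  rs = 0 : 1/(fs # rs)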
Now, with fromInteger in place, we have this x, which I showed you a formula for before: the polynomial [0, 1]. And in red I've said you've got to preserve polymorphism. Haskell would ordinarily decide, "oh, this is a definition, I've got to give it a type," and we don't want it to have a fixed type yet, because it might end up being rational rather than integer. Having done that, take the formula 1/(1 - x), which as you recall is the sum of the geometric series down at the bottom of the page; take ten terms of it, and they're all ones. So with little more instruction than you were given in third grade on how to do arithmetic, we've gotten to power series.

Calculus. The derivative is really easy: all you have to do is multiply each coefficient by its index and drop the constant term, whose derivative is zero. And we happen to have the handy zipWith and the handy enumeration notation in Haskell. Actually, there is a gotcha in this particular line of code if you're really into Haskell. Because we used an enumeration, we get Enum in the type, and that's a contamination. Haskell has dealt with this in certain other places: if you enumerate starting from one half, you get one half, three halves, five halves, seven halves, and the result is Rational and Enum. The notion that the rationals are enumerable in that sense is a little bit strange. So I could make these power series enumerable too, but I can avoid it by just spelling out the definition instead of using zipWith. And integration is the same thing, only with a zipWith divide, and you shift in the opposite direction.

Having derivative and integral, we can start playing games. Cosine and sine: we have the differential equations for them, with initial conditions, at the top. If we integrate those differential equations and substitute the initial condition for the constant term, we get those two formulas for cosine and sine. And that is a working program. Similarly for the exponential.

Here's an application of differential equations: we want to raise e to a function. Because of the chain rule, the derivative of e-to-the-f is f-prime times e-to-the-f; that's the first formula. Then, turning the differential equation into an integral equation, the same as we did on the last slide, we get y equals e to the constant term of f, plus the integral of the derivative times y. And that turns into this Haskell program. Not a very pretty program; we can write it much prettier at the bottom: e to the f of x is just a functional composition. But the upper formula is a linear process, and the lower one is, if I remember correctly, cubic.
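Concretely, the polynomial x and that first experiment might look like this; geometric is a name invented here for the sketch. The explicit signature is one way to preserve the polymorphism the slide asks for; without it, the definition would get pinned to a single default type.

    -- The monomial x, kept polymorphic so it can be used over Integer,
    -- Rational, Double, or even power-series coefficients.
    x :: Num a => [a]
    x = [0, 1]

    -- The geometric series: every coefficient of 1/(1-x) is 1.
    geometric :: Fractional a => [a]
    geometric = 1 / (1 - x)

    -- take 10 (geometric :: [Rational])  gives ten ones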
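Derivative and integral might be transcribed like this. The zipWith versions from the slide are shown in comments; spelling out the recursion is what keeps the Enum constraint out of the types.

    -- Derivative: drop the constant term, multiply each coefficient by its index.
    -- (slide version:  deriv (_:fs) = zipWith (*) fs [1..]   -- drags in Enum)
    deriv :: Num a => [a] -> [a]
    deriv []     = []
    deriv (_:fs) = go 1 fs
      where go n (c:cs) = n*c : go (n+1) cs
            go _ []     = []

    -- Integral: shift the other way, divide each coefficient by its new index.
    -- (slide version:  integral fs = 0 : zipWith (/) fs [1..])
    integral :: Fractional a => [a] -> [a]
    integral fs = 0 : go 1 fs
      where go n (c:cs) = c/n : go (n+1) cs
            go _ []     = []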
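The sine, cosine, and exponential equations above really are the program; here they are as a sketch, using the integral defined just above. expf is a name invented here for the e-to-a-function version; it needs Floating for the exponential of the constant term, and the prettier but more expensive alternative by composition appears in the comment.

    -- Turn each differential equation into an integral equation: the leading
    -- constant is the initial condition, and integral's leading 0 is the delay
    -- that makes the self-reference productive.
    expx, sinx, cosx :: Fractional a => [a]
    expx = 1 + integral expx           -- y'   = y,    y(0)   = 1
    sinx = integral cosx               -- sin' = cos,  sin(0) = 0
    cosx = 1 - integral sinx           -- cos' = -sin, cos(0) = 1

    -- e raised to a whole series f:  y = e^(f(0)) + integral (f' * y)
    expf :: Floating a => [a] -> [a]
    expf f@(f0:_) = ys
      where ys = [exp f0] + integral (deriv f * ys)
    -- When f has zero constant term this is also just  expx # f,
    -- which is prettier but roughly cubic where the integral equation is linear.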
Generating functions. If you have a generating function in Haskell, it is something you can execute. It really does generate; it's not just an abstract idea that a generating function generates. And this is the most complicated slide of the bunch: the generating function for binary trees. A binary tree has a root with two binary trees under it. If you know how many trees of size 5 there are for the right-hand subtree and how many of size 2 for the left-hand subtree, their product counts trees of size 5 plus 2 plus 1 for the root, which is 8. That argument gives this simple little red recurrence for the generating function: the generating function for the number of binary trees is 1 plus x times its square. And remember, multiplying by x is shifting right, consing a constant onto the front.

So there's the formula. But why couldn't I use plus in this one, the way I used plus in the exponential? Because in this case the feedback isn't delayed. The t on the right is exactly the same t as the t on the left; it's not the tail of it, and nothing has been prepended to it. So if I wrote plus, this would be an infinite loop. I had to use cons. And this produces the Catalan numbers, which count binary trees, or the number of ways to parenthesize an expression with n primitive symbols in it, and so on.

I have one more lovely generating-function example. Oh, not yet. Because lists are in class Num, and the instance declaration said that for lists to be in class Num the underlying type has to be in class Num, the construction is recursive: I could have a power series whose coefficients are themselves power series. Those coefficient series had better be polynomials, because otherwise it takes infinitely long to generate the first term of the outer series; but if they are polynomials, we're in good shape. And here's an application. The generating function for the binomial coefficients is (1 + x) to the nth power. Now plug that into the geometric series, which has a term at every power: the nth coefficient becomes (1 + x) to the n. Plug it in and see what you get at the bottom: Pascal's triangle, almost for free.

We need to do a little bit more work, though. I've been showing you answers that were sort of lies: 1/(1 - x) is a fraction, so the answers really came out fractional the way I had written the program, and yet I showed you the answers as integers. In fact, the answers didn't come out as integers; they came out as floating-point numbers by default in Haskell. If I change the default ordering of types for interpreting constants to Integer, then Rational, then Double, I get the nice rational arithmetic I've been showing. The other bit of polishing is that I can put the formula for Pascal's triangle into just one line of code, by simply substituting the polynomial for 1 + x into the formula for 1/(1 - x); there's the formula, and there [0, 1] is x.

Well, that's about enough examples. The point is just to show how fast you can build and how nice it looks, because Haskell is based on really nice mathematical ideas that carry over when you apply it to mathematics. Lazy evaluation; the head-tail pattern, compute the head and recur on the tail, which should be drilled into every student; and the other contributory factors listed on the slide, all in the design of Haskell. All these formulas can be found on my website, on one very little page; it turns out there are only ten one-liners to get all the way to Pascal's triangle. And that's all I have to say. Thank you.
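For reference, the binary-tree one-liner discussed above might be written like this, building on the list instance from earlier; ts is just the name used in this sketch.

    -- T = 1 + x*T^2: the cons supplies both the leading 1 and the factor of x,
    -- and it delays the feedback so the definition is productive.
    ts :: Num a => [a]
    ts = 1 : ts^2

    -- take 10 ts  gives  [1,1,2,5,14,42,132,429,1430,4862]   (the Catalan numbers)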
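And the Pascal one-liner: a power series whose coefficients are themselves polynomials, with [1,1] standing for 1 + x. The default declaration is the device mentioned above for getting exact rational rather than floating-point arithmetic; an explicit annotation, as in the comment, does the same job at an interactive prompt.

    default (Integer, Rational, Double)   -- try Integer, then Rational, then Double

    -- 1/(1 - (1+x)y): the coefficient of y^n is (1+x)^n, which is row n of
    -- Pascal's triangle. [1,1] is the inner polynomial 1+x.
    pascal :: Fractional a => [[a]]
    pascal = 1 / [1, -[1,1]]

    -- take 5 (pascal :: [[Rational]])
    --   gives  [[1],[1,1],[1,2,1],[1,3,3,1],[1,4,6,4,1]]   (printed with % for Rational)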