So, what I want to talk about here is what I'm calling "Finally, MTL". I want to introduce this idea called a finally tagless encoding of DSLs. I'm going to talk about that pretty briefly, and then I'm going to talk about how MTL, the Monad Transformer Library in Haskell, is actually a really widespread example of using finally tagless DSLs. I think it's not often recognized that those are pretty much the same technique, and that finally tagless DSLs are actually fairly popular and used in a lot of different places. So, I'll dive right in; this one's a little bit faster, hopefully.

A tale of three DSLs. The first is one a lot of people might be familiar with, sometimes called Hutton's razor; it's about the simplest DSL anyone can imagine. It's the adding language: it represents integers, and it represents the addition of integers. So we have two constructors: we can inject integers into the add language, and we can take two elements of the add language and say that we want to add them together. We can build an interpreter for the add language, which takes any value of the add language and turns it into the integer we believe it should represent. To do this, we just pull the integer out of the literal constructor if we have one of those, and if we have an add constructor, we interpret both sides to get the integers they represent and then add them together. Pretty simple. Here's an expression of the add language; hopefully nothing too complicated, especially since you just sat through the last talk.

Another language you might use is Hutton's backup razor, which is the exact same language, except that instead of adding, we multiply. Literally, these two languages are extraordinarily similar. But you might want both of them, and in fact you might be programming something that doesn't have the add and multiply languages specifically, but two other DSLs you want to work in. Now we have Hutton's travel kit, which he goes traveling with: the add language and the multiplication language glued together somehow. In particular, we can do that very simply by building a DSL which has a constructor for the adding operation and a constructor for the multiplying operation. Our interpreter just has to handle both operations, and our expressions can be constructed using either addition or multiplication, which is pretty nice. Unfortunately, anyone looking at this is probably cringing, because I spent all this time defining one language and all that time defining the other, and then, gluing them together, I repeated basically everything all over again. So we have a standard problem: we can't just compose these languages together, so we end up repeating ourselves a lot.

We can use techniques like pattern functors, which were talked about at 10 o'clock today, to try to fix this problem. I'm going to run really quickly through the operation of splitting those languages into pieces and then reassembling them nicely. What we'd like to do is take the add language and the multiplication language and stick them together somehow to form the ring language we had before.
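(For reference, here is a minimal sketch of the three concrete DSLs just described. Names like AddLang and interpAdd are reconstructions, not necessarily the talk's slide code.)

```haskell
-- Hutton's razor: the adding language.
data AddLang
  = AddLit Int            -- inject an integer literal
  | Add AddLang AddLang   -- add two expressions together

interpAdd :: AddLang -> Int
interpAdd (AddLit n) = n
interpAdd (Add l r)  = interpAdd l + interpAdd r

addExpr :: AddLang
addExpr = Add (AddLit 1) (AddLit 3)

-- Hutton's backup razor: the same shape, but multiplying.
data MulLang
  = MulLit Int
  | Mul MulLang MulLang

interpMul :: MulLang -> Int
interpMul (MulLit n) = n
interpMul (Mul l r)  = interpMul l * interpMul r

-- Hutton's travel kit: both glued together, repeating everything.
data RingLang
  = RingLit Int
  | RingAdd RingLang RingLang
  | RingMul RingLang RingLang

interpRing :: RingLang -> Int
interpRing (RingLit n)   = n
interpRing (RingAdd l r) = interpRing l + interpRing r
interpRing (RingMul l r) = interpRing l * interpRing r
```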
So, let's try it again. We introduce the fixed point of types, which we saw an hour ago; hopefully everyone's familiar with this one. Given the fixed point of a functor, we can use this fix-fold operation to take an algebra on that functor and extend it over the whole fixed point, very similar to what we were doing in the last talk. These are a little bit annoying because, to show these things, we're going to need UndecidableInstances, but whatever. Then we create two pattern functors, one for the addition language and one for the multiplication language. These are pretty much identical to the type constructors we had when we wrote the ASTs out concretely, but they take a type parameter, x, instead of being declared recursively. And if we hit those with Fix, we get the add language and the multiplication language back just the way we had them before. We can write the expressions; these are the same expressions I showed before. They're a little bit annoying because I have to throw that Fix constructor in there, but no big deal. Then we can add some smart constructors if we really want, and completely ignore the fact that we had to do this Fix thing at all.

All right, great. We can write the interpreter for the add language and the interpreter for the multiplication language; these are pretty much the exact same code as the interpreters of the original languages, just operating on the functors instead of the data types directly. All I'm demonstrating here is that by doing this fixed-point, recursion-schemes surgery, we can take the exact same DSLs we were working with before and express each as a functor smashed into a fixed point of functors.

Then we introduce this guy, the functor sum, which turns any two functors into another functor that is either the first functor or the second, depending on which one we care about. We can create its eliminator, fold-sum, which takes an algebra for each functor and sticks them together into the combined algebra; a pretty straightforward thing to do. And then we can build the ring language as the fixed point of the sum of the add functor and the multiplication functor. By doing this surgery, we pulled out the common pieces and stuck them into the fixed-point machinery, kept the pieces specific to addition and the pieces specific to multiplication, and glued them together nicely. We end up with our ring language. We had to build a bunch more smart constructors, and ultimately it works.

So, cool: we used Haskell, we used all sorts of really cool type machinery, and we managed to pull off a composable DSL. We had to write about 40 new lines and use three more pragmas, but we got to use Fix in anger, so... win. Truly, it is a really cool thing; I don't want to say anything bad about this process. It was talked about at 10 o'clock, if you saw that one; it comes from Data Types à la Carte, and it's a really interesting technique for solving the expression problem, which I don't want to denigrate at all. I just want to talk about a different way of solving this problem.
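(A sketch of that surgery, assuming identifiers like fixFold and foldSum; the actual slides may have differed.)

```haskell
{-# LANGUAGE DeriveFunctor, TypeOperators #-}

-- The fixed point of a functor.
newtype Fix f = Fix (f (Fix f))

-- Extend an algebra on f over the whole fixed point (a catamorphism).
fixFold :: Functor f => (f a -> a) -> Fix f -> a
fixFold alg (Fix f) = alg (fmap (fixFold alg) f)

-- Pattern functors: the recursive positions become a type parameter x.
data AddF x = AddLitF Int | AddF x x deriving Functor
data MulF x = MulLitF Int | MulF x x deriving Functor

-- Hitting them with Fix recovers the original languages.
type AddLang = Fix AddF
type MulLang = Fix MulF

-- The same interpreter code as before, now as algebras on the functors.
addAlg :: AddF Int -> Int
addAlg (AddLitF n) = n
addAlg (AddF l r)  = l + r

mulAlg :: MulF Int -> Int
mulAlg (MulLitF n) = n
mulAlg (MulF l r)  = l * r

-- The functor sum: either the first functor or the second.
data (f :+: g) x = InL (f x) | InR (g x) deriving Functor

-- Stick an algebra for each functor together into a combined algebra.
foldSum :: (f a -> a) -> (g a -> a) -> (f :+: g) a -> a
foldSum fAlg _    (InL f) = fAlg f
foldSum _    gAlg (InR g) = gAlg g

-- The ring language: the fixed point of the summed pattern functors.
type RingLang = Fix (AddF :+: MulF)

interpRing :: RingLang -> Int
interpRing = fixFold (foldSum addAlg mulAlg)
```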
So, if that was Data Types à la Carte, I want to talk about data types à la Carette: Carette, Kiselyov, and Shan. Let's try this again. It's the exact same game as before, but instead of representing all of our types concretely as these very obvious ASTs, I'm just going to talk about what it means to add things together and what it means to multiply things together. Hopefully you can see that the add method here has a lot of similarities (if I scroll back a bunch of slides, unfortunately) to the add language: the Add constructor there takes an add-language value and an add-language value and gives you an add-language value back, and down here, this add method of the Adds type class does a similar thing for any type v. We can do another one for multiplication. We can instantiate these with really obvious types like Integer, and then we can create little expressions which are integer languages: we can add one and three together, and we can multiply one and three together. Hopefully this looks completely trivial and boring and dumb.

Over here, when we put one and three in, those are forced to be Integers, because that's all we can do. But if we add this new class, FromInteger, which gives us the ability to inject integers, in the same way that the literal constructors of our ASTs before allowed us to inject integers into the ASTs, then we can write something like this. It's effectively the same thing we were writing before, only now we've noted that the integers have to be explicitly injected into our language. And we get a very different kind of type, which is really the focus here. The type says: for any v which instantiates both addition and the ability to inject integers, this little expression is one of those v's. We can do the same thing with the multiplication expression: for any v which has both multiplication and the ability to inject integers, this expression has that type. If we specialize v to Integer, like we were showing before, we get back exactly the same expressions. But we have something more general here: we're saying that for any type which instantiates at least these two interfaces, at least these two classes, this expression inhabits it.

And now, unlike before, where we had to do all of that fixed-point surgery and create the sum of functors, we can add these two languages together just by listing out all of the constraints at once. I can make a new set of constraints called Rings of v, which demands that we can convert from integers, we can add, and we can multiply. This Rings expression is exactly the same as the ring expression from when I was using the concrete DSL, but it's expressed without any extra fixed-point machinery, without any surgery or anything like that: just very directly, by stating that we need addition, multiplication, and the ability to get integers in.

Question from the audience: is Integer the most specific type you need, or could you use naturals or even Booleans? You could inject those too; in the same way that when you're building a DSL you can inject whatever type you want (you might build a constructor for Booleans, a constructor for naturals, and a constructor for integers), here I've only asked for the capability to inject integers, because that's all I need. But I could create a new class, say FromNatural, whose method injects naturals into our type.
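(A sketch of this finally tagless encoding; the class names Adds, Multiplies, and FromInteger, and the particular ringsExpr evaluating to 15, are reconstructions of the slides.)

```haskell
{-# LANGUAGE ConstraintKinds #-}

-- What it means to add, to multiply, and to inject integers.
class Adds v where
  add :: v -> v -> v

class Multiplies v where
  mul :: v -> v -> v

class FromInteger v where
  int :: Integer -> v

-- The really obvious instances at Integer.
instance Adds        Integer where add = (+)
instance Multiplies  Integer where mul = (*)
instance FromInteger Integer where int = id

-- For ANY v with addition and integer injection, this is a v.
addExpr :: (Adds v, FromInteger v) => v
addExpr = add (int 1) (int 3)

mulExpr :: (Multiplies v, FromInteger v) => v
mulExpr = mul (int 1) (int 3)

-- Composing the two languages is just composing constraints.
type Rings v = (FromInteger v, Adds v, Multiplies v)

ringsExpr :: Rings v => v
ringsExpr = mul (add (int 1) (int 2)) (int 5)  -- 15 at v ~ Integer
```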
Then, instead of the constraint set being FromInteger, Adds, and Multiplies, it would be FromInteger, FromNatural, Adds, and Multiplies, and I'd be able to inject naturals in the same way that I'm injecting integers. So it's just a design choice, really.

Now, if we put this Rings expression into the interpreter, it immediately pops up 15, but that just has to do with the defaulting rules: it's defaulting to guess that we wanted Integer for v. If we instantiate something else, like RingLang, we note that there's an exact correspondence between the literal constructor, the addition constructor, and the multiplication constructor of RingLang and our three classes, so v can be specialized to RingLang instead of Integer, and it gives us our DSL exactly like we hoped. What I really want to indicate here is that there's an exact correspondence between this type (FromInteger, Adds, and Multiplies, all of those constraints together) and the DSL we were working with before: the same constructors are there, and if we have an expression of type Rings of v, for any v that instantiates Rings, we can instantiate v as our concrete DSL AST and get that AST back immediately.

The technique I've been talking about is a really, really simplified form of the technique of finally tagless DSLs, introduced and popularized by Carette, Kiselyov, and Shan. The advantage is that we compose just by adding constraints together. We're building a big list of constraints, and if we want new capabilities, such as the introduction of naturals like you were asking about, all we have to do is add another constraint to the list; the constraints just stack up. We don't have to say explicitly which language sits on the left or the right of a sum, and we don't have to create new machinery for fixed points or pattern functors. We get all of that just by expressing exactly what we want: some type which has the ability to add, the ability to multiply, and the ability to accept integers. Because we never mention those concrete Left and Right tags, the encoding is "tagless"; that's where the name comes from. But if we do have one of those ASTs, we can show that the AST instantiates exactly what we're talking about: we can directly interpret our demands for integer injection, addition, and multiplication as the constructors of that AST, and get it back directly. So there's a correspondence between these two things, and I want to suggest that anything you would do with a DSL in the very concrete AST format, you can do in this finally tagless style as well, just by gluing together the different demands you have of the language. I don't want to say they're isomorphic, but they're related closely enough that you can get the same job done.
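(A hedged illustration of that correspondence, reusing the Adds, Multiplies, FromInteger, and ringsExpr names from the sketch above.)

```haskell
-- The concrete ring AST from before.
data RingLang
  = RingLit Integer
  | RingAdd RingLang RingLang
  | RingMul RingLang RingLang
  deriving Show

-- Each demand is interpreted as the corresponding constructor.
instance FromInteger RingLang where int = RingLit
instance Adds        RingLang where add = RingAdd
instance Multiplies  RingLang where mul = RingMul

-- Specializing v to RingLang hands back the concrete AST:
asAST :: RingLang
asAST = ringsExpr
-- => RingMul (RingAdd (RingLit 1) (RingLit 2)) (RingLit 5)

-- Specializing v to Integer runs the interpreter instead:
asInteger :: Integer
asInteger = ringsExpr  -- => 15
```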
So the question is: does it work at scale? Does this keep building? Can you keep gluing constraints together indefinitely? My quick answer is yes, because we have the monad transformer library. What I really want to point out here is that the monad transformer library, which people sometimes complain about because it uses a lot of type-class Prolog, is an exact instantiation of the finally tagless pattern.

We have a similar kind of type up here: for any monad m, and we don't necessarily know which monad that is, as long as it instantiates this MonadState interface, we can construct an operation that fits that interface, because we're only using get and put, which just demand that whatever monad we're in is a state monad. If we want to go and pull in a concrete manifestation of that interface, we can go to the transformers library and pull out the actual State monad, which is an actual data type, and just go ahead and claim that the m we're talking about is State, and it will run concretely, immediately. We can also be more explicit and spell out the transformer stack that exists: StateT and Identity glued together.

So what MTL is giving you is the ability to specify exactly which constraints you want: what are the capabilities, what are the effects we need in order to write this computation? When you're writing a monadic value, you don't have to determine exactly how the transformers are going to stack together, or what the concrete representation of the effect type is. You just say what you need, and when you compile it, the compiler solves for an answer meeting your demands. So if we have an operation which requires state and errors and IO, we could interpret it as a free monad over some kind of state-and-error functor if we wanted; we can interpret it as the state transformer stacked on top of the error transformer stacked on top of IO; or we can interpret it in the other order. All of those are ways of instantiating the demands of the person who wrote the operation.

Now, a lot of people have complaints about MTL. I find it really interesting, really useful and practical, but I have to address the fact that it doesn't do everything you want. One thing people often bring up: if you look at the transformers library at the instances for, say, StateT, there are effectively three, and the MonadTrans one gives you all you need, because it allows you to build your stacks completely explicitly. But when we do the same thing in the monad transformer library, the instance list runs off the page. That's because we have to actually tell the Prolog that's running in your type classes what all the acceptable ways to combine effects are, all the acceptable ways to pass effects through one another, and that's not a straightforward answer: the ways effects combine are not simple, so we have to declare all of the combinations. So people point out that with n classes and m concrete instantiations you end up with n times m implementation work, and that's more or less true. But I want to point out where that work is primarily borne: if you're just using MTL, you don't have to worry about it too much; it's the people who maintain MTL who have to actually write all those n times m instances.

Question from the audience: isn't that sort of just a fact of nature, though? The effects compose in different ways; however you want to represent it, you're still going to have to write that code somewhere. Absolutely, yes. And I think the nice thing about this is that when you have the need to write n times m instances, the fact that some of those instances don't make any sense can be expressed by simply not having them; whereas if you don't even have the space to talk about the n times m possible instances, you have to just assume they all exist. And for users, if you're using this in your own applications, you don't have to worry about that too much: your own personal finally tagless classes are going to be maybe one or two classes with one or two concrete implementations, so n times m might be like four or six. That's just what I was talking about there: we're allowed to state all of the predicates about which compositions of effects are allowable, and among the n times m possible instances you could write, you're allowed to leave out the ones that don't exist.
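(A sketch of this MTL story; tick and op are hypothetical operations of mine, while MonadState, MonadError, MonadIO and the runStateT/runExceptT runners are the actual mtl and transformers API. Note how the two runners for op commute the stack and produce different result types, which sets up the ordering point below.)

```haskell
import Control.Monad (when)
import Control.Monad.IO.Class (liftIO)
import Control.Monad.State    -- mtl: MonadState, State, StateT
import Control.Monad.Except   -- mtl: MonadError, ExceptT
import Data.Functor.Identity (runIdentity)

-- Demand only the MonadState interface; get and put are all we use.
tick :: MonadState Int m => m Int
tick = do
  n <- get
  put (n + 1)
  pure n

-- Interpret it concretely by claiming m is State Int ...
runTick :: (Int, Int)
runTick = runState tick 0            -- => (0, 1)

-- ... or be explicit about the stack: StateT glued onto Identity.
runTick' :: (Int, Int)
runTick' = runIdentity (runStateT tick 0)

-- An operation demanding state, errors, and IO, with no commitment
-- to how the transformers will stack.
op :: (MonadState Int m, MonadError String m, MonadIO m) => m ()
op = do
  n <- get
  liftIO (print n)
  when (n > 3) (throwError "too big")
  put (n + 1)

-- One interpretation: state on top of errors on top of IO.
runOp1 :: IO (Either String ((), Int))
runOp1 = runExceptT (runStateT op 0)

-- The commuted order: errors on top of state on top of IO.
-- The result type differs: here the final state survives an error.
runOp2 :: IO (Either String (), Int)
runOp2 = runStateT (runExceptT op) 0
```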
The other thing I mentioned earlier, really quickly, just to finish this off: MTL forces you to forget about the ordering of your effects, and that can actually be a big problem, because if you commute some effects you end up with genuinely different behaviors. For instance, with state over errors, the state is lost when an error is thrown, while with errors over state, the state changes made before the error survive. So if you're writing something in this MTL style, this finally tagless style, you're dependent on whoever uses and interprets your values (maybe it's not you, it's somebody else) not cheating, not doing something wrong, not misunderstanding what you wanted. That, I think, is simply a drawback of this kind of style: when you're only making constraints and demands at the level of the type, you don't have the ability to express ordering. So ultimately this comes down to convention and communication. You have to state laws, and you have to talk about how those laws combine. You could say that MonadParser is a particular way of combining state and errors, and if someone instantiates MonadParser, it's dependent upon them to do it right. Or you could create a more concrete finally tagless effect, saying exactly what each of these things means; and again, the implementer of that type, the interpreter of that type, is required to know what it means and use it properly.

There's one more drawback here, but it's definitely running close to the end of time, so I just want to say this: sometimes you want a lot of explicit structure, and sometimes using the type-class Prolog in Haskell is really convenient, because it lets you state exactly what you want, that you just need certain effects and certain operations. Finally tagless style can be used to represent pretty much any AST you want, and MTL is a pretty mature example of using finally tagless in anger. So if you want to explore this and think about it in that way, take a look at MTL. All right, thanks everyone!