Well, thanks for coming out, y'all. I do appreciate it. Before we get started, I do want to say a big, massive, stinking thanks to RubyConf, all the organizers, and volunteers, and everybody that put this thing on. It's a ton of work, and I massively appreciate it. Also, on a personal note, I'd like to say thanks to The Iron Yard, where I work, for supporting me, and also to all y'all for coming out. You had a choice in where to be right now, and you're here, and I appreciate it. Today, we're gonna talk about composition. I hate talks that start with a "Webster's defines composition as," but I'm gonna do it. And here's why. In doing my background research, I looked through Wikipedia, and it struck me as very amusing that the first definitions for object and functional composition are exactly the same. Not exactly, but isomorphic, right? Except for the "not to be confused with the other thing" note. That struck me as amusing. So I'm gonna take a swing at defining this for myself. For me, the thing I'm interested in for this talk is: composition means combining pieces cohesively. It's the thing that we do as programmers all the time. We look at a complicated problem, we figure out how to chip away little bits, we find solutions for those little bits, and then we reassemble it into a cohesive whole, right? So that's the thing that I'm interested in trying to understand a little bit more: that assembly process. So at a high level, my goals for this talk are: I wanna explain what object and functional composition are. I'll spend a little bit more time on functional composition, kind of assuming that people are less familiar with that area. I also want to intentionally confuse the two a bit, right? I think a lot of times this object-oriented versus functional thing is presented as an either/or, like it's a choice, you can do it one way or the other, and implicitly usually like one's right and one's wrong. I don't believe that.
I think these are more similar than they are different, and like people grasping an elephant in the dark, like we're just looking at the same sort of thing through different lenses, and I wanna explore that a little bit, no lens joke intended. Yeah, and then kind of along the way, I'm assuming people here are maybe a little bit less familiar with functional stuff, so I'd like to expose you to some phrases and terminology, some vocabulary that you might not have, or might be a little bit different, if you wanna go and explore some more functional programming on your own. So before we jump into any of that, I wanna talk about one of my favorite subjects: me. So, some background just to kind of explain where I'm coming from here. I've been doing Ruby for six or seven years now. I have been interested in composing little small things for a lot longer than that. Also, I just love that picture. But my interest in composition kind of seriously comes from, I ended up majoring in math, going to grad school, and doing my master's thesis on applications of monads to topology. Don't worry so much if you don't know what those mean, that's not terribly important. The point is, I spent a lot of time thinking about category theory. So, like, a quick summary of my life: there was Legos, and then there was topology, and then there's category theory, and then there's Haskell, and that gets you current, like, we're here now. So I'm presenting everything kind of through that lens, right? To borrow a couple of phrases, I would say that I am Haskell-infected to my functional core. Out of curiosity, how many people have looked at some Haskell in a cursory way? Okay, all right, decent number. How many people found it somewhat daunting? Yeah, me too, right? Like, I have a degree in category theory and this stuff's not easy. If you've looked at some Haskell, you may have seen things like this. Oh, it's easy, yeah, like, you wanna write hello world? Let me explain to you what a monad is.
I appreciate that, right? Legitimately, the reason I got into Haskell was because I was interested in category theory and the abstract, and was graduating and needed a job, and somebody told me this stuff was useful. But you don't need this, right? If you hear anything like this, ignore it. I actually intended to present some slides that were like, things that I learned from grad school and category theory that are important for this talk, but then I realized I'm probably gonna be short on time and decided to cut the whole thing, and I think that's a really good kind of like microcosm. You don't need it to understand the day-to-day work that you're doing. It's very useful if you have the time, which we don't; you should look into it more. But really, the only thing that I'll mention here is what a category is, in case you're not familiar with it. So in math speak, a category is a bunch of objects, some functions between them, and the means to compose them. And that's it, right? Category theory is: if this is all the data you have, what can you do? So really, category theory is the study of composition. That's kind of the genesis of my interest here. I'll point out I have a lot more stuff, like big fancy words that are nice, but I'm not gonna say any of them, so don't worry about it, but come talk to me later if you're interested. Let's get to composition, okay? So, object composition. I'm gonna assume that this is somewhat familiar. I'm not an expert here, so I will probably give a bad definition. Also feel free to argue with me after if you're so inclined. But just to kind of sketch this and level-set terminology, so we have a reference point when we look at this through a functional lens in a second, let me quickly go through kind of what I mean when I say these words. I'm gonna say that object composition refers to one object holding a reference to another and using that to perform its duties somehow. I'll have an example if that's not helpful.
You may have heard "favor object composition over class inheritance" somewhere. Usually when this is talked about in relation to inheritance, it's a has-a relationship instead of an is-a. It's not specialization, it's a has-a. Let me try and make this a little bit more concrete with some examples. I'd like to model a system that represents topologists, unsurprisingly. And the important thing that I want my topologists to be able to do is tell jokes. So here we go, right? If you're an object-oriented programmer, you probably start with something like this. Like, what is a topologist? Well, it's a noun, I used a noun, so I probably want a class, that seems reasonable. And how do I represent a topologist? Well, they need to be able to tell jokes, so that's a method, and then they'll do it somehow, right? It's not super important for this. Presumably the jokes come from somewhere; I'm gonna assume that it's not the topologist's responsibility, they'll just remember all of the jokes they've heard over their life. But anyway, so we have this object, we can use it, it's fairly straightforward, right? Let's say we have a topologist, we'll initialize them by passing, you can pretend the list of topologist jokes is just a static array. It's probably pretty accurate, I don't know, there aren't a lot of new topology jokes being written day by day. But yeah, so here we go, right? We have a working object. And we run it and you get a joke. It's an okay joke. Here's a Klein bottle, just for reference. It doesn't have an inside, or it's all inside, or something. Sorry, there are a couple more of these, just go ahead and bear with me. Okay, so this is all well and good. The important thing, though, is we wanna understand how our system extends, right? So we have our topologists; now let's say we wanna model algebraic topologists in particular.
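Since the slides aren't reproduced in this transcript, here's a minimal Ruby sketch of the class being described; the joke list is a stand-in of my own:

```ruby
# A stand-in for the static array of topologist jokes from the slides.
TOPOLOGIST_JOKES = [
  "Why did the chicken cross the Mobius strip? To get to the same side.",
  "A topologist is someone who can't tell a coffee cup from a donut."
].freeze

class Topologist
  def initialize(jokes)
    @jokes = jokes
  end

  # Pick any joke they've heard over their life.
  def tell_joke
    @jokes.sample
  end
end

Topologist.new(TOPOLOGIST_JOKES).tell_joke  # returns one of the jokes above
```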
And algebraic topologists are kind of notorious for not being terribly interested in things that aren't group theory, and maybe have a short attention span, I don't know. So here's my swing at modeling an algebraic topologist. An algebraic topologist is clearly a type of topologist, it's right there in the name, right? So that's a subclass for sure. And they have a particular way of telling jokes, namely they're gonna have their same collection of jokes, but they don't care about a lot of them. So they'll just grab one that they actually care about. Reasonable enough, and it works fine, right? Nothing wrong with this. So if we have another topologist, another joke, here we go, you ready? Ask me later. And we can keep going, right? Like, so if we have a loud topologist, just a simple example, right? They pick their joke in the usual way, same as other topologists, but they have a particular style of delivering it where they just shout everything really loudly all the time. And again, works fine, no problem. If you've seen Nothing is Something, you already know what problem's coming up next. That was kind of the genesis of this part of the talk: thinking about how you sidestep a lot of these sorts of problems in the functional world. But now what happens when I wanna write, oh, I'm sorry, I skipped the joke. It's okay, this is not a good one. Again, ask me later if you care. All right, so the problem comes, the classic diamond problem, right? What happens when you wanna write a loud algebraic topologist? We have specialized in two different directions and don't have a good means of recombining those. We have options; they're both terrible. We don't wanna do it. So what's a different approach then? Like, how can we approach this not through inheritance but through composition? I'll mention there are a couple of ways. If you've seen Nothing is Something, you may have one in your head already. I'm gonna do a different one.
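For reference, the inheritance version and the diamond it runs into might look like this; the joke-selection rule here is my own invention, since the real criteria aren't shown:

```ruby
class Topologist
  def initialize(jokes)
    @jokes = jokes
  end

  def tell_joke
    @jokes.sample
  end
end

# Specializes *selection*: only grabs jokes they actually care about.
class AlgebraicTopologist < Topologist
  def tell_joke
    @jokes.select { |joke| joke.include?("group") }.sample
  end
end

# Specializes *delivery*: shouts everything really loudly.
class LoudTopologist < Topologist
  def tell_joke
    super.upcase
  end
end

# The diamond: a LoudAlgebraicTopologist wants both specializations, but
# we can only pick one superclass, so one behavior gets duplicated by hand.
```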
I don't claim that it's particularly good, but it is an example of object composition. So here's another go at this. Let's take my topologist exactly like I have it, right? But the instant that we need to extend it, we go, all right, instead of trying to specialize, let's create a new class. And I'm gonna call this class an algebraist. So here's how algebraists work. No longer will an algebraist be responsible for managing jokes. Instead, an algebraist just needs to hold on to a topologist friend. And when the algebraist has to tell a joke, they're gonna ask their topologist friend what jokes they know, pick out one that they like, and recite it, right? So this is what I mean when I say has-a, right? Like, this algebraist holds onto a reference to another person that has the jokes. And that works fine. This is the last one. Because seriously, all the other ones I know are long and you don't care. So yeah, right, this works fine. And the important thing here is we layer in this extra functionality by using this object that kind of wraps our original topologist instead of subclasses. We can continue in this fashion, and I will for a moment. Our loudness can be layered on by another layer, right? A loud speaker holds onto a reference to a jokester. All I really care about with the jokester is that they know how to tell a joke, and I'll just modify the joke that gets told on the way out. And that works fine. And the nice thing is, with these objects, I have the capability of creating objects in my system with all of the combinations of behavior that I care about. So that's good; all of these do work. Some observations, right? Because I have this flexibility using objects in this way, we get to parameterize the behavior and swap it out reasonably well, and don't need a different class for each kind of combination of behavior. There are definitely some problems with this particular approach, though.
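Here's a sketch of that wrapper-based version, again with my own joke-selection rule standing in for the slide's:

```ruby
class Topologist
  attr_reader :jokes

  def initialize(jokes)
    @jokes = jokes
  end

  def tell_joke
    jokes.sample
  end
end

# Holds a topologist friend, asks what jokes they know, picks a favorite.
class Algebraist
  def initialize(friend)
    @friend = friend
  end

  def tell_joke
    @friend.jokes.select { |joke| joke.include?("group") }.sample
  end
end

# Wraps any jokester and modifies the joke on the way out.
class LoudSpeaker
  def initialize(jokester)
    @jokester = jokester
  end

  def tell_joke
    @jokester.tell_joke.upcase
  end
end

topologist = Topologist.new(["a group walks into a bar", "knock knock"])
LoudSpeaker.new(Algebraist.new(topologist)).tell_joke
# => "A GROUP WALKS INTO A BAR"
```

Note that the two wrappers aren't actually interchangeable: Algebraist reads `jokes` off whatever it wraps, while LoudSpeaker only exposes `tell_joke`.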
One is, I just wrote these things and sort of have implicitly assumed that Algebraist and LoudSpeaker are sort of equivalent wrappers, and they just aren't, right? There's an error here. Did y'all catch it? Like, it's easy to overlook. And I literally just wrote this and it's not complicated. The error here is that Algebraist needs the object that it wraps to have a full list of jokes that it inspects, and LoudSpeaker only exposes a tell-joke method. It's an easy mistake to make. We assumed something about the APIs just implicitly from context that's not true. So it's still pretty brittle. And my way of writing this is probably not the way that other people would come up with. So if we try to make implicit assumptions about how our objects work, we're likely to run into errors like this, right? And this gets worse the more different choices we had to make along the way. I'll admit, like, if you're a design pattern expert, you probably have some better tools for managing this, and those are great and wonderful. But I'm kind of interested in, in the functional programming that I've done, this has never really been an issue, right? Like, usually there's just kind of one thing that you can do, and you do it, and it works, and it's nice. So let's switch gears a minute and kind of talk about function composition and how it works, and then see if we can pull some of those benefits back to Ruby land. So, function composition. Warning: there's gonna be some, like, high school math here. So I apologize if that upsets anyone. I'm gonna try and keep it real brief. I think we're probably okay. So, functions. Y'all remember functions, right? Here's standard function notation, right? F is, like, you plug in a value for X and it returns X plus two. So when you plug in a number, you just replace it and you get three. There's another function. I can also plug in numbers, I get a result. And here's the important part that I actually care about: I can compose these things.
So here's the math notation for composition: it's just a circle. So the composition F composed with G, when I plug in a number, it's a pipelining, right? It's a data pipeline. First I'm gonna take that value and plug it into G, and take the result and plug that into F. So by definition it's that, which you can compute out. I'm not actually interested in the number there. So if I were to write all of this stuff in Haskell, here's what it would look like. And part of the reason I like Haskell is because a Haskell function is a lot like a math function, right? They work about the same. I don't remember functions in math class ever not working because the network was down. Not a thing. You don't have to worry about it, they're just good. So we have some type annotations. This is what they look like in Haskell. I'll leave some of them throughout. But the important thing is you have a composition operator and it works exactly the same. The composition operator in Haskell is a dot, which, if you think about how ubiquitous those usually are in any programming language, should tell you something about how important Haskell thinks composition is. But it works exactly the same as math functions, right? F composed with G is the function that first feeds a value through G, takes the result, feeds it through F. So by composing those two, I get a new function. And this is really the key important part, right? It's not just shorthand for not having to write parentheses. It's a means of composing more complicated bits of functionality from simpler ones. So F composed with G is, you know, it is a new function, and I can use it just as well as any other function. I can write out an expression for it because it's math, but I don't need to, right? I can just describe it in terms of that composition and use it the way I would anything else, which is nice. Interestingly enough, composition is itself a function. Composition is a function that takes two functions and gives you a new function.
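Ruby has these operators too: since 2.6, `Proc#<<` composes like the math circle and `Proc#>>` pipelines left to right. A quick sketch using the f from above, plus a made-up g (the talk's actual g isn't reproduced here, so take g(x) = x²):

```ruby
f = ->(x) { x + 2 }
g = ->(x) { x * x }  # stand-in; the talk's actual g isn't shown

# f << g is math's (f . g): feed x through g first, then f.
h = f << g
h.call(3)  # => 11, because g(3) = 9 and f(9) = 11

# f >> g pipelines the other direction: f first, then g.
(f >> g).call(3)  # => 25, because f(3) = 5 and g(5) = 25
```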
And you can puzzle through the types there and make sure that lines up, but you do them in the reverse order and then they line up and go A to C. Works out, I promise. So there's some, like, Haskell notation. Lest you think this is all just academic and for doing, like, high school algebra, here's kind of what that looks like in a real-world, like, web server sort of thing. You know, in the web server world, we can think of our app as a function that receives a request, and the way to read that type signature is: it receives a request, does some IO, and eventually returns a response. That IO could be talking to a database, writing to a file, launching missiles. It's very scary, we don't know, but it takes in a request and returns a response and does some stuff. Middleware for an application, right? Like, what middleware does is it takes an existing application and wraps it in some extra functionality. So, you know, adds something. You can think of middleware, again, as a function that takes in your existing app and gives you back a new app that's a variant of the one that you have. So, you know, I've written plenty of code that looks about like this. You know, my development mode application is: take my base application and run it through a middleware chain, where the middleware chain is itself a middleware, right? I have a collection of functions that take an app and return an app, so I can line them up end to end and compose them and get a new piece of middleware that I can apply. I've expanded out the type on serve static just for reference there. Serve static takes a string, like the path that it would serve static files from, and produces a middleware, right? So if you were to walk that all the way out, it's a function that takes a string and returns a function that takes a function from request to response and returns another function from request to response. So, like, oftentimes unpacking those types is actively unhelpful, but that's okay, right?
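A rough Ruby analogue of that middleware idea, with all the names made up for illustration: an app is a lambda from request to response, a middleware is a lambda from app to app, and the chain is just composition.

```ruby
# An app: request in, response out.
base_app = ->(request) { "response to #{request}" }

# A middleware: app in, new app out.
logger = lambda do |app|
  ->(request) { app.call(request) + " [logged]" }
end

# serve_static takes a path and *returns* a middleware.
serve_static = lambda do |path|
  lambda do |app|
    ->(request) { request.start_with?(path) ? "static file" : app.call(request) }
  end
end

# Compose the app -> app functions end to end; the chain is itself a middleware.
middleware_chain = logger >> serve_static.call("/assets")
dev_app = middleware_chain.call(base_app)

dev_app.call("/assets/site.css")  # => "static file"
dev_app.call("/home")             # => "response to /home [logged]"
```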
Like, we can work at this higher level of abstraction and descend when it makes sense. This is actually super similar to how Rack works, right? Slight differences, where you take an environment hash and get back a tuple of some things, but same basic idea. And if you have ever been delighted by the way that, like, Rack middleware composes, it's because it's just functions, mostly. Anyways, I have some advantages listed; I'm gonna skip this in the interest of time, although I do wanna say one thing. One of the things that makes this work is these functions are pure. I don't like that term. I think Jessica Kerr has a better term for it, like, data in, data out, right? These are high school math functions that take a value and return a value, and that's it, right? When you have functions like that, your life is pretty simple. I'll skip this; I'll mention kind of more of the advantages of these sorts of actual, like, functional functions as we go along. Okay, so having seen a little bit of Haskell, let's consider that topologist modeling problem and see what it would look like in Haskell. Oftentimes, I think the hardest part for me is just figuring out what kind of thing it is that I'm trying to write. Like, so often in the object-oriented world, it goes, oh, I need an object, I'm gonna start writing a class, and kind of jump in. If I'm doing this in Haskell, I know that my goal is a function, but I kind of have to think through, like, for our web server it was natural to have a function that took a request and returned a response. What's the corresponding thing here? And I think it's important to think more about kind of responsibilities, right? What are the verbs? What are the things that your system is doing?
So in this world, while a topologist is a multifaceted being that can do lots of things, including topology and not just joke telling, the thing that I'm interested in is their ability to aggregate all of their experiences and produce a joke. So I will model a topologist as a function that takes the list of jokes that they've heard and returns the one joke that they want to tell right now. For simplicity's sake, I'm gonna model a joke as a string. So really, I'm just trying to build a function that takes in a list of strings and returns a string. I should again say, for folks that have done some Haskell, I am gonna elide some details about empty lists and shuffling requiring IO and things like that. Again, I will post completed code with those full notes in it at some point down the road, but take this as a sketch. So here's a possible implementation of the topologist function that's equivalent to the object we had before. So given a list of jokes, what the topologist does is shuffle this list and take the head, the first element of the list. I could rewrite that differently. Notice I am taking the output of one function and feeding it through another. So I could write that thusly. Or equivalently, I feel like that goes backwards from the way I wanna think about it a lot of times. So I will often, for the purpose of this talk, write a double arrow as just a reverse composition, just composition in the other order. So I could also write it this way. Now, here's the powerful change in perspective. I have described this topologist function by saying what it does to a value. I would like to get away from that where possible. And if you think about high school math class where you just cancel on both sides of an equation, we can do the same thing here, in a way. This is totally equivalent to writing this. A topologist is the function that you get by chaining: first you do shuffle, then you do head. This style is called point-free notation.
Also "pointless," if you're feeling snarky. Right, it's just, we don't talk about values, we talk about building blocks and composing them. And I find that illuminating. Okay, so there's our topologist function. It is, I think, as simple as it possibly could be, right? There's very little ceremony here. So let's keep going, right? We wanna write our algebraic topologist. Here's what that could look like. Same basic type, right? It takes a list of strings, returns a string. The one little wrinkle here is that our algebraic topologist cares about a subset of jokes and filters them down. Filter is just like a select on a Ruby array. And again, I have written this in terms of the jokes, but I really can kind of drop that and say: an algebraic topologist, what they do is they follow this three-step process of filtering down the jokes, then shuffling, then picking one, right? It's a high-level description of what's happening. And I find that helpful because it really illuminates the similarity and differences between a topologist and an algebraic topologist, right? Those are structurally super similar, and that kind of throws into relief the difference, right? My default topologist is missing something here. So if I wanna unify these, right, if I want one thing that can produce all of these different kinds of behaviors, it becomes fairly clear what to do, right? I need something that looks like this, right? A topologist is just a person that thinks everything's funny all the time. That's not accurate. My model has gone wildly astray, I'm sorry. But yeah, so there we see the structural similarity, and it kind of makes it clear what to do next. All right, this is the pipeline that we have for all of these. So if I wanna create a topologist, well, I'll need to supply the criteria that they use for judging a joke, but once we do, there's my pipeline, right? It's exactly the thing I've written thus far.
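As a preview of pulling this back toward Ruby later in the talk, that shared pipeline translates almost directly using `Proc#>>` (Ruby 2.6+); the names here are mine, not the slides':

```ruby
# make_topologist: give it a sense of humor (a predicate), get back a
# topologist, i.e. a function from a list of jokes to one joke.
make_topologist = lambda do |funny|
  ->(jokes) { jokes.select(&funny) } >>
    ->(jokes) { jokes.shuffle } >>
    ->(jokes) { jokes.first }
end

topologist           = make_topologist.call(->(_joke) { true })
algebraic_topologist = make_topologist.call(->(joke) { joke.include?("group") })

algebraic_topologist.call(["a group walks into a bar", "knock knock"])
# => "a group walks into a bar"
```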
So kind of, yeah, from there we can recover our original implementations. I should say, the backslash-arrow thing is a lambda; think stabby lambda in Ruby. So I can kind of recover my original definitions. But importantly, I have this higher-level make-topologist thing, and conceptually I'm thinking of that as a function that takes in a sense of humor and returns a topologist. Really what that is is this thing, which again is, I think, a little bit less enlightening. And we can keep going, right? A loud topologist, same idea, right? They have a similar pipeline but end up shouting it when they're done. And so if I wanted to make my general make-topologist function also be able to produce this thing, well, I just need to extend my pipeline. So I can do that, right? Now I'm thinking of make-topologist as a function that takes in a sense of humor and a mode of delivering jokes and produces the topologist that has those properties. So that's the actual type, but ignore that. And it's fairly straightforward to write. I'll need to parameterize each topologist by those two bits of behavior. And I can do that. I'll point out Haskell has some niceties for defining functions like this. The function that takes a value and just returns it is the identity function, just id, so I'll write that in some places. Cool, so I like this, right? Like, we get everything we need. We have all different kinds of behavior, and I'm free to reuse them in any combination that I want, which is nice. So I'd like to start clawing this back towards Ruby. My first thought is, like, we could just try and transliterate some of this. So let's look at that very briefly. If you had a function like this that just multiplies by two, in Haskell world, I think the closest, just like, transliteration of this function to Ruby would be: it's a lambda, right? There's no classes anywhere. So, just a stabby lambda.
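Concretely, that transliteration might look like this (my rendering of the slide):

```ruby
# Haskell: double x = x * 2
# The closest Ruby transliteration: a bare lambda, no class anywhere.
double = ->(x) { x * 2 }

double.call(3)  # => 6
double.(3)      # => 6, shorthand for call
double[3]       # => 6, also works
```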
If you haven't used lambdas much, the way you use them is you can call the call method. I'm apologizing in advance for how many times I'm about to say call. But yeah, you can call call, and it executes the function and evaluates it. That's fine. If we consider something a little bit more involved, a function that takes three arguments, the interesting thing is, in Haskell this is actually a function that takes an argument and returns a function that takes an argument and returns a function that takes an argument. I've lost count of how many times I said that, but you get the idea. So the actual transliteration of this would be something like this, which is horrible, and I don't recommend writing it in Ruby. All your coworkers will hate you, but it works, right? And incidentally, I found this out recently: you can curry procs. Fun fact. Anyways, so while this matches kind of the letter of what we wrote, I don't think it captures the spirit of the solution that I ended up with. So let's take a step back. What I want is a function. Right, yeah, so here's the literal one. It's a little bit harder to read because I'm mixing procs versus methods, and so it doesn't flow right to left, but you can convince yourself that's equivalent. But I think that misses the spirit of the solution. Again, what I ended up with was a function where I can parameterize a sense of humor and a delivery method, and I get a topologist back. So I still want to think about a topologist as an object. Here's my swing at modeling that. Let's say every topologist has their own sense of humor and delivery. This is the object composition solution that I prefer, and that you may have jumped towards initially. And when I initialize my topologist, I will provide them with that sense of humor and delivery, which they will hang on to and use to do the thing that they need to do. I'm gonna freeze things because I've been doing too much Haskell, but it's a good policy just in general, if you can.
So then, when the topologist needs to do its thing, I'm still trying to pretend that it's a function, still trying to keep things as similar as possible. So I will define a call method on this topologist that takes in a list of jokes and does the exact same behavior. So, isomorphic, right? But this feels a lot more ruby-ish to me, right? This feels more natural in an actual Ruby code base. And I think there are some advantages to this solution. Here's what this looks like in practice, right? When I make my topologist, my loud topologist, I can kind of pass in, like, here's your sense of humor and delivery method, just using some stabby lambdas again, because those are functions. I guess another advantage is, when I was thinking of that loud topologist at the very outset, I thought of it as a specialization, I thought of it in terms of: a loud topologist is a topologist with a particular kind of delivery. With this setup, that becomes pretty easy to write, right? I can define a with method that just gives me back a new version of the same object; it doesn't manipulate the existing object, just provides a new one with whatever overrides I wanna provide. And that would let me write something like this. A loud topologist is a topologist with a particular kind of delivery, right? That's exactly the thing that I was thinking when I started. Also notably, the only thing I care about with delivery and sense of humor is that they have a call method, right? Procs have that, but they're not the only things. In particular, I can write anything I want that has a call method and pass it in, right? So if I wanna think about a sense of humor as an object, I can, right? If I wanna nest some complicated logic there, I can, but it's still kind of API-compatible with a proc, so I'm free to mix and match them. So I can write an object that represents the sense of humor that an algebraist has and create topologists with that bit of behavior injected in, which is nice.
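Putting those pieces together into one runnable sketch (the names and details are mine, not the actual slides):

```ruby
# A "functional object": fields injected at construction, frozen,
# one public method, call.
class Topologist
  def initialize(humor:, delivery:)
    @humor    = humor     # anything that responds to #call
    @delivery = delivery  # ditto
    freeze
  end

  # Still pretending to be a function.
  def call(jokes)
    @delivery.call(jokes.select { |joke| @humor.call(joke) }.shuffle.first)
  end

  # A new, reconfigured copy; the existing object is untouched.
  def with(humor: @humor, delivery: @delivery)
    self.class.new(humor: humor, delivery: delivery)
  end
end

topologist      = Topologist.new(humor: ->(_joke) { true }, delivery: ->(joke) { joke })
loud_topologist = topologist.with(delivery: ->(joke) { joke.upcase })

loud_topologist.call(["a very quiet joke"])  # => "A VERY QUIET JOKE"
```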
So, kind of trying to package that up into some rules. Not a huge fan of rules, but if you wanna try that sort of approach, here, I think, are some guidelines to follow. So, the objects in a system like this, these functional objects, all have a defined collection of fields, and the state is entirely expressible in terms of those fields. They're immutable, right? You can't update things in place; to update one, you create a new object. But you kind of only ever update to reconfigure anyways, so that's not a huge performance problem. Really, these objects are just kind of steps in your data pipeline, so you tend to need relatively few of them. And if your objects are being functional, they should respond to call to perform their primary responsibility. You get a lot of benefits working this way. In a system that follows those sorts of rules, all of our objects have a defined responsibility; they have one public method, call. There's really only one thing that you can do with them, and there's not much of a way to confuse it. They have a predictable interface. The only thing that you care about is: what type of thing do I need to feed in, and what type of thing do I get out? They're immutable, nice injectable dependencies, easy to configure, all of these nice things. And this is not new. I think it's compelling that, again, starting from a functional perspective, you end up writing object-oriented code in a way that a lot of object-oriented people just naturally stumbled upon anyways. The really nice thing, I think, when you have a system written this way, is because all of your objects are so predictable, they're easy to swap out, right? Like, I can plug in a proc somewhere, and I can mock out things very simply. If I want to move logic around, I am confident that there aren't side effects anywhere, so I can just snap a piece off, plug it back in, as long as the structure is correct, right?
As long as all of the types line up, I can just move things around all willy-nilly. It feels like playing with Legos. That's great, that's me, tiny. Now, I've said "types" a lot, and I think it's reasonable, if you haven't used a strongly typed language, you may have some objections here. I do not necessarily mean anything about a compiler here. I'll point out, like, if you've been doing Ruby, you've been thinking about these sorts of things; these just maybe aren't the words that you used to describe what you've been thinking about. If you've ever written a method like this, what you're doing is you're starting with a request, you're feeding it through a parse-input function, you're feeding that through a persist function, and you're feeding that through a success-with-ID function. Like, this is a composition, whether you have thought about it as such or not. And you were assuming some things along the way, right? You're assuming that whatever parse input returns is something that persist can take. So, I would prefer to be explicit about that, right? Like, let's name those things. Parse input takes a request and returns a post, persist takes a post and returns an int, and the process that we're following here is chaining those individual steps one after another. Now, you probably don't quite think about it this way. Like, you don't typically think about types when you're doing Ruby, you think about duck types, but on balance it ends up being fairly similar. You don't really care that parse input returns a post so much as you care that parse input returns a thing that has all of the methods that persist needs, whatever those happen to be, right? So really, you're kind of thinking of it as a persistible, and that's super powerful, right? That's a thing that's actually kind of harder to emulate in Haskell a lot of times. But still, structurally somewhat similar.
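That method, rewritten as an explicit composition; every helper body here is hypothetical, just enough to make the types visible:

```ruby
# Request -> Post -> Int -> String, each step named and explicit.
parse_input     = ->(request) { { title: request.fetch(:body) } }  # Request -> Post
persist         = ->(_post)   { 42 }                               # Post -> Int (fake ID)
success_with_id = ->(id)      { "created post #{id}" }             # Int -> String

create_post = parse_input >> persist >> success_with_id
create_post.call(body: "hello")  # => "created post 42"
```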
Like, whether or not you want to think about these types, your functions do have things that they can accept and things that they can't. And you've thought about violations of this all the time. You know, if you change up your function definition to something like this, where my persist function might give me back an ID, but might give me back nil, then I've broken my pipeline. Something downstream can break. You may not have thought of it in terms of "these functions don't compose anymore," but on balance that's what happens, right? The problem is now persist maybe gives you an int, but maybe gives you nil, and that can cause some problems, whether that be compile time or run time, who cares? Again, in the interest of time, I'm gonna skip some of this testing stuff. But yeah, so don't shy away from types. I don't mean that you need all of this at compile time, but there are latent types in your functions, and I think it pays to think a little bit more carefully about them. Let's try and look at kind of a real-world example. So here's some code that I pulled out of a project that I was working on for the Peace Corps called MedLink. The basic goal of the system is, volunteers out in the field don't often have reliable internet access, so they have to, like, text in, like, hey, I need Tylenol and bandages, and we'll put it in a system. So this is some code that I had written a long time ago. This is a service object that does that: taking a text message and placing an order. It's not bad, right? Like, if you're writing code like this, awesome. There's some big wins here, right? There's a nice, clear single responsibility because there's only one public method. It's a high-level description of what the thing does. Decent names for everything. I do have this extracted and not in a model somewhere, so I can reuse it, and I needed to, right? Our admins needed a tester where they could just mock one out, and I was like, oh, I've got an object for that.
It's great, it's ready to go. But my analysis of the problems maintaining this is kind of two-fold. One, this is not in any way, shape, or form isolated from the database, so eventually my tests started getting real slow because I couldn't see the dependencies to swap them out. I also couldn't see the interdependencies between these individual steps in the process, so down the road when we made a web interface, and I needed to pull out the last three steps, that ended up being way harder than it seemed like it should be because of hidden interdependencies, right? My parse message set some state that some but not all of the later functions needed, and that just wasn't ever clear, so refactoring was sort of a pain. I'd like to get away from that, right? I get it, I want Legos, right? Just little pieces I can move around. So, I think the big lesson learned here was that I am composing things. Like, same sort of idea, these steps run one at a time, and there's data that's generated in one that's fed into later ones, but it is completely implicit. And when your composition is implicit, so are the boundaries, and so are the values flowing between those boundaries. And that can make it really hard to reason about what's happening. I always prefer to make those things explicit, even if it's a few more letters, right? It's worth it. So, let's be clear about what's happening, and let's try and name those intermediate steps. Sometimes that requires making up a new class for your intermediate representation; that's fine. So, let's think about this in terms of a pipeline. You know, maybe it's not clear, maybe this forces you to clarify your thought a bit. But, you know, here's a stab at a pipeline, where I start with an SMS, I convert that into a parse result that has some data, I build out an actual, like, order domain object from that, and then do some other things that seem sketchy. Come back to that.
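One way to name those intermediate steps in Ruby; these structs are illustrative stand-ins for the pipeline's stages, not the actual MedLink classes:

```ruby
# Explicit intermediate representations for the pipeline:
# SMS -> ParseResult -> Order -> String
SMS         = Struct.new(:from, :body)
ParseResult = Struct.new(:volunteer_id, :supply_codes)
Order       = Struct.new(:volunteer_id, :supplies)

# Each boundary now has a name, even without a compiler:
# parse_message : SMS -> ParseResult
# build_order   : ParseResult -> Order
sms = SMS.new("+15551234", "tylenol bandages")
```

Even if it's a few more letters, the values flowing between the steps are now something you can point at.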
And then finally, when all is said and done, and I've placed the order, I convert it to a string that the user can see. So, I wanna write something that looks like that. And I can do that, right? Like, I can just be explicit. All right, yeah, okay, here we go. So, that works, right? This is the idea. I'm trying to write Haskell in Ruby though, not Lisp, so this looks bad. And, like, you know, again, the data flow seems backwards. I'd like to be a little bit more explicit about what's happening here. So, let me try this. Let me see how y'all feel about this one. Let's say that, like, let's say that what this order placer does when you call it, oh, I should, I'm sorry, I should say: the fields there is just shorthand. Like, every class has some attr_readers and an initialize method that supplies them and then freezes, and fields is just shorthand for, like, these are the fields that I parameterize this object on. Usually, for these, it's: here are the dependencies that this thing has. So, like, when this thing runs, here's what it does. First off, it takes an SMS, so I only need one of these. I always feed new objects through. I'm not holding on to a reference. And I'm just gonna feed it through that pipeline. There's a tiny little bit of cleverness here, but, you know, take a look at the compose method. What that does is, you know, composing a sequence of steps gives you back the function that applies the methods one at a time. I have a little bit of cleverness to let me write a symbol naming the method and then pull out the method. But remember, all of these objects are frozen, so all of those methods are really just kind of state-free, and this is totally fine to do. Not gonna cause any confusion, hopefully. For reference and for saving space, I will probably write reduce from here on out. If you're not comfortable with reduce-style stuff, don't worry, it's the same thing.
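A sketch of that compose helper, assuming steps are symbols naming methods on the object; the class and step names here are invented for illustration:

```ruby
class SmsOrderPlacer
  def call(sms)
    pipeline.call(sms)
  end

  private

  def pipeline
    compose(:parse_message, :format_reply)
  end

  # compose(:a, :b) gives back a proc that applies the methods one at a
  # time. The tiny bit of cleverness: a symbol names a method on self,
  # and method(step) pulls out a callable for it.
  def compose(*steps)
    ->(input) do
      steps.reduce(input) { |value, step| method(step).call(value) }
    end
  end

  def parse_message(sms)
    sms.downcase.split
  end

  def format_reply(words)
    "ordered: #{words.join(', ')}"
  end
end

SmsOrderPlacer.new.call("Tylenol Bandages")
```

Since the object is frozen, those extracted methods are effectively state-free functions, so threading a value through them with reduce is safe.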
Just saving some space there. So that's equivalent, right? Maybe not a huge win, it is literally equivalent. But just, here's the starting point. From here, like, the implementation of the individual steps, your parse message, your build order, would look something like this, right? They take an object, they return an object, that's it. I guess that is sort of a lie, right? I have these dependencies that I've injected in, and at some point I will need to save things to the database, but at least they are a little bit easier to test because they're a little bit more explicit now, which is nice. But I still am primarily interested in that main-line flow of data through this service object. Here's roughly what a test could look like, right? You may inject in some dependencies. Some of them may be service objects that are proc-compatible. They don't all have to be, it's fine. But the main thing you care about is, like, when you pass a message, here's the response you get. And possibly my dependencies receive some messages along the way. So that's fine. The thing that I'm primarily interested in is, with this setup, when I want to extract out just the order placer, right? Just those last three steps, it's almost trivial to do so, right? Here's the extracted version. Now I have a new dependency. My SMS order placer depends on a base order placer that will be provided somehow. I'm not gonna worry about it right now. And my pipeline is just composing those through. There is a little wrinkle in that I'm now not naming the thing with a symbol; the order placer is an actual, factual proc. So my little compose needs to respect that and just call it if it's already callable. But other than that, like, everything is unchanged. And the only thing I had to do, right, the thing that's important there, is the order placer has the same shape, right? It has the same input and output types as those last three steps composed. So it swaps in, and I don't have to think about anything else.
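The callable-aware version of compose might look like this; SmsResponder and the injected lambda are hypothetical stand-ins for the service object and its base order placer dependency:

```ruby
class SmsResponder
  def initialize(order_placer)
    @order_placer = order_placer
    freeze
  end

  def call(sms)
    compose(:parse_message, @order_placer).call(sms)
  end

  private

  # Respect things that are already callable (procs, other service
  # objects); otherwise treat the step as a symbol naming a method.
  def compose(*steps)
    ->(input) do
      steps.reduce(input) do |value, step|
        callable = step.respond_to?(:call) ? step : method(step)
        callable.call(value)
      end
    end
  end

  def parse_message(sms)
    sms.downcase.split
  end
end

# The injected dependency only has to have the right shape:
placer = SmsResponder.new(->(codes) { "placed #{codes.size} items" })
placer.call("Tylenol Bandages")
```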
The order placer itself is, sorry, somewhat similar; it is literally just the last three steps copied over, right? Everything's state-free. So, again, as long as the inputs and outputs line up, it all works. Feels like Legos, it's great. So that's all well and good. There is one kind of important lie that I've told along the way, right? This is all talking about what happens when all of your types line up and everything composes end-to-end, which is nice, right? If I have a nice clean flow of data throughout, then I can compose the whole thing, and that's good. That never happens though, like, ever. What actually happens is stuff fails. Stuff fails all the time, right? Any number of computations may fail to produce a value, some kind of error or something. Let me model that the simplest way I know how, which is: when one of these steps fails, it's just gonna return nil. So, right, my types are now different along the way. Maybe I have a parse result, maybe I have an order, maybe not, it could all be nil. So, right, the shapes don't stack up anymore, right? I end up with some expression that sometimes is a parse result, sometimes isn't. And I still wanna get from my initial SMS to the end result. I can't do anything about the fact that something could have failed along the way, right? I could end up with a nil, that's inevitable. But if nothing failed along the way, I would like my original value back, right? And I can do this, right? Think about it: I can apply my first function just like normal, and then I'll end up with, well, one of two things. Either I have a nil or I don't. If I don't, cool, I can feed it through next step, next step, next step. If I have a nil, also cool, I just need to pass it forward, right? If anywhere along the way I ended up with a nil, just keep that nil. I want the end result of this composition to be nil. Here that is in code.
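A sketch of that nil-aware composition; the SafePipeline class and its steps are illustrative, not from the talk:

```ruby
class SafePipeline
  def call(text)
    compose(:parse_number, :halve).call(text)
  end

  private

  # If any step produces nil, the remaining steps are skipped and the
  # whole pipeline evaluates to nil.
  def compose(*steps)
    ->(input) do
      steps.reduce(input) do |value, step|
        value.nil? ? nil : method(step).call(value)
      end
    end
  end

  # Returns nil on failure; the composition operator handles the rest.
  def parse_number(s)
    Integer(s) rescue nil
  end

  def halve(n)
    n / 2
  end
end
```

The individual steps stay simple: return nil when something goes wrong, and trust the composition operator to short-circuit.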
The only change is, as we're composing these methods, we look at the result from the previous step and respect the fact that it might be nil. If it is, cool, we end up with a nil. If not, we call the next step along the way, right? And the only thing that needs to change in my actual individual steps is the following. I can just return, right? If I didn't have a result, if something went wrong, I return nil and I'm done. And I can trust my composition operator to halt the process there, right? The individual steps don't need to worry about that higher-level context. Again, I can validate that with some tests, right? If I pass in an invalid message, then I won't even get so far as calling the database save-order thing. So I have recreated error handling, but purely out of function composition, right? That's the powerful thing about these abstractions. I don't need a lot else. I just need to compose functions. Let's crank that up a notch, right? Realistically, when your functions error, you probably wanna know why. So there is a thing for that. It's called either, usually, in the Haskell world. So the idea behind an either is, when something goes wrong, you're gonna return some object that represents that. I'm gonna say a string, just like an error message. So along the way, each step of this process needs to acknowledge that fact, right? It could return an error message, or it could return the actual value. I'll need to be a little careful here to differentiate between a successful string value and an error string value. But modulo that, it's the same, right? I have these crossed arrows where I don't quite have a parse result, something could have gone wrong. I don't quite have an order, something could have gone wrong. But it's okay, 'cause there's still a sort of natural thing to do, right? If I have an error at any step along the way, propagate it forward; if not, unwrap the value and feed it through. So I can still get from there to there. Here it is in code, right?
For me, I'm representing an either as a struct so that I can differentiate between a value and an error. And then my functions just need to return those enriched values. The actual composition operator also needs to adjust a little bit, but not a ton. The thing that's different now is my result could be either an error or an actual value. If it's an error, cool, pass it forward. If not, unwrap it, feed it through the next step along the pipeline. I have a similar example with logging that I'll post in the slides, where the logger kind of builds up a string as it goes, or builds up an array. It accumulates an array of messages. But I want to, kind of, in these last couple of minutes here, try and give a name to the commonality between all of these. So, some baseline observations, right? All of these things, error handling, logging, and a whole lot more, we can build entirely out of pure functions. Like, we don't need any sort of extra language constructs. And every single one is just defining a slightly different means of composing functions. I don't use instance methods anywhere other than the cleverness with method, which isn't really necessary. So this is really, like, a generic thing. And what I probably should have written is, like, a compose-using-maybe, a compose-using-either, a compose using any of these different strategies. So here's how a category theorist sees all this stuff. What we have with any of these examples is, I don't know why that's a larger arrow. That's irritating. I'm very sorry. It looks fine here. Okay, anyway. Like, we have these composition pipelines, right? And kind of sitting above each of them, we have these values with extra context. And this could represent a whole number of things, right? Like, MA could represent either an actual A value or an error object. It could represent a list of A objects. It could represent a promise that will eventually resolve to an A object, or maybe error, who cares.
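One way to sketch that either struct and its composition operator; the names and the toy parse/halve steps are invented for illustration:

```ruby
# An either: tagged as either a successful value or an error message.
Either = Struct.new(:value, :error) do
  def self.ok(value)
    new(value, nil)
  end

  def self.err(message)
    new(nil, message)
  end

  def error?
    !error.nil?
  end
end

# The composition operator: propagate errors forward; otherwise unwrap
# the value and feed it through the next step.
def compose(*steps)
  ->(input) do
    steps.reduce(Either.ok(input)) do |either, step|
      either.error? ? either : step.call(either.value)
    end
  end
end

# Each step returns an enriched value: Either.ok(...) or Either.err(...).
parse = ->(s) { s =~ /\A\d+\z/ ? Either.ok(Integer(s)) : Either.err("not a number: #{s}") }
halve = ->(n) { n.even? ? Either.ok(n / 2) : Either.err("odd number: #{n}") }

pipeline = compose(parse, halve)
```

Now a failing step carries a reason with it, and the pipeline still short-circuits the remaining steps.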
A probability, there's a whole bunch. This is scratching the surface. But the commonalities here are, we can kind of play around with composition, right? Like, if we have sequences of arrows, oftentimes what we'll find is, if we have these kind of ground-level functions, we can lift them to this extended context. The word for that, just for your reference if you ever talk to a functional programmer, is to say that M is a functor, right? If you can lift these downstairs values upstairs, M is a functor. That process of moving up is called lift or map, and as an exercise, if you wanna think through this: lists are a functor, and the operation of moving upstairs is map, like the function on lists. It's a good exercise. Similarly, some commonalities: along the way, at each step, we had some means of embedding our kind of ground-level value in this extended context, right? You know, if we have a C, then that is also a C-or-an-error. If we have a C, there's an obvious promise that will just immediately resolve to that C. There's kind of always that extra thing. These embedding arrows, in functional programming parlance, are called return, which is immensely confusing, or pure. And an important thing about this diagram is that it's commutative, and what that means is, if you think about these as just paths, we have lots of ways of getting from A to M, E. But the nice thing is, it doesn't matter. We can do whatever we want and be guaranteed that no matter how we compose our functions, we'll always end up with the same result. We don't have to think about it, which is nice. I could go this way, I could go that way, it's the same thing. So it makes sense to talk about the arrow from one point to another. Those are fine. Really, the thing that we saw over and over in these examples was, we often end up with these kind of functions that return extended context, right?
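Working the list exercise in Ruby: for the list functor, the lifting operation really is just map, taking an ordinary function and moving it upstairs to work on arrays:

```ruby
# Downstairs: an ordinary function, Integer -> Integer.
double = ->(n) { n * 2 }

# Upstairs: lift takes a ground-level function and returns the lifted
# version, Array -> Array. For lists, lifting is just map.
lift = ->(f) { ->(xs) { xs.map(&f) } }

lifted_double = lift.call(double)
lifted_double.call([1, 2, 3]) # the same doubling, now over a whole list
```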
Functions that might error, functions that return a promise, and we want to chain promises together, right? Lots of examples of those. The important thing that we saw throughout those examples was that we had some way of taking those functions and still cross-composing them, right? Even though they don't quite line up, we still could kind of promote and fudge and lift somehow, and end up with a chain of values flowing along in this higher-level context. The functional programming word for that is a monad. If you can do that, that's what a monad is. This moving-up operation is called bind, written in Haskell as >>=. And again, list is also a monad. I think it's instructive to think through what that operation looks like when you're talking about lists. So, I'm at time. Some parting thoughts here. If you've heard things like this, I think they're garbage. I'm sorry if someone tried to harass you with these. The thing to keep in mind is, it's an abstract thing, right? They're hard to define, but really a monad, the way I think of it, is just a way to extend a computation composably. They are about composition, that's it. If you can cross-compose, you probably have a monad. If you have a monad, you can cross-compose, that's it. So yeah, some final parting thoughts. I'm sorry that this ended up being a monad tutorial. I feel like those are overdone, but I hope it was helpful. If you wanna try this style of Ruby stuff out on your own, I wrote a lot of these examples to be, hopefully, explicit and understandable in this context, but don't reinvent the wheel. There are some gems out there that will help you with things like this. Also, if you wanna do a service layer, I think it's a great place to start, because service layers are all about verbs, and functional programming is good at that. You probably do want some sort of IoC container; those exist.
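Working through the list exercise for bind in Ruby: for lists, the bind operation is flat_map, which chains functions that return lists without nesting, and return/pure is wrapping a value in a singleton array:

```ruby
# Functions into the list context: String -> Array of String.
words = ->(line) { line.split }
chars = ->(word) { word.chars }

# bind for lists is flat_map: it lets these "crossed" functions chain
# without ending up with arrays nested inside arrays.
result = ["ab cd"].flat_map(&words).flat_map(&chars)

# return / pure for lists is just the singleton array:
pure = ->(x) { [x] }
pure.call("ab cd").flat_map(&words)
```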
Also, dry-rb in general is largely written in a style very similar to this, so I think it's instructive to read through some of those gems. I'll be posting a blog post and references for these slides later. So if you are interested, follow me on Twitter or something, or come talk to me later. So yeah, keep in mind: how you compose, I think, is at least as important as the individual objects that you're composing. So as you assemble your programs, do take care to be explicit about how things compose, intentional about what those boundaries and values are, and you'll end up with something that's pretty nice and extensible. That's it for me.