The title of this talk is Boundaries. This is the only one-word talk title at this conference, which I'm very proud of. The next shortest is three words. Thank you. Some of the stuff in this talk is going to be very familiar to anyone who comes from certain functional programming backgrounds, but this is a story of me approaching some ideas that they have from a very different direction and from a very different history. So I am Gary Bernhardt. I look like this on the internet, where sometimes I get mad. And my Bluetooth is not working very well; I might have to forgo it. I own a company called Destroy All Software that produces screencasts on various advanced software development topics. To start us off in this talk, we're going to start with test doubles. There are a couple of talks about test doubles, mocking, and stubbing at this conference. This is not a talk about test doubles, but they are going to be part of my motivation. Just to make sure everyone's on the same page, let's go through a quick example of what an isolated unit test might look like. I have a Sweeper class. This is in some kind of recurring billing situation. And if I have a user who is subscribed but has not paid in the last month, I want to tell him that something's wrong and disable his access. So when a subscription is expired, we will make a user, Bob. He's going to be a stub. He's an active user and he last paid two months ago. We will have an array of users that's just Bob, for convenience. And before every test, we're going to stub out the User.all method to return that array of Bob. So this is one of the ways in which we're isolating ourselves from third parties, from other classes like User. We want to email the user when the subscription is expired. So we will invoke the sweeper, and we expect it to call UserMailer.billing_problem to send an email to this user telling him things are bad. So this is an isolated unit test.
It's isolated because it removes its dependencies, like User and the UserMailer. Hopefully my phone is back now. Awesome. Okay. The implementation of this is very simple. We will pull out all the users from the database. We will select only the ones who are active users but have not paid recently enough. And then for each of those, we will send the email. So, very straightforward stuff. What we have here is a three-class system. These three classes integrate in production, but in tests we're removing two of the dependencies, replacing them with stubs and mocks, giving us this as our testing world. So everything is nice and isolated. There are several good reasons to do this, several very big benefits that come out of it, but there's also one really terrible thing that happens when you do this. So let's go through those. This allows you to do real test-driven design. Looking at your test, seeing that you have mocked six things and two of them are mocked three method calls deep, tells you that your design is not so good for this class. So it gives you a form of feedback that you can't get without isolated tests; at least I don't know how to. It allows you to do outside-in TDD, where you actually build the higher-level pieces before the low-level pieces exist. So we could TDD the Sweeper using the User and the UserMailer before those classes exist, because we're just stubbing them out anyway. Then, when we want to write the User class for real, we can look at what we stubbed, and that tells us the interfaces it needs. And finally, this gives you very fast tests. This is one of the main things in the whole "fast Rails tests" meme, or, I don't want to call it a movement, but people getting excited about fast tests in the Rails world. We're talking about the difference between a 200-millisecond time from hitting return to seeing the prompt back versus a 30-second time to run a very small test. It's a very big difference when you're really isolating.
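As a concrete sketch, here is roughly what that three-class system and its isolated test could look like in plain Ruby. The names (Sweeper, User.all, UserMailer.billing_problem) come from the talk; the bodies and the hand-rolled stub and mock are my reconstruction, standing in for the ActiveRecord and RSpec machinery the talk actually uses.

```ruby
require "date"

# A stand-in for the ActiveRecord model: just enough to run the example.
User = Struct.new(:email, :active, :last_paid_at) do
  def active?
    active
  end
end

class UserMailer
  def self.deliveries
    @deliveries ||= []
  end

  # In production this would send an email; recording the call lets the
  # test below assert on it, playing the role of a mock expectation.
  def self.billing_problem(user)
    deliveries << user
  end
end

class Sweeper
  def sweep
    # Pull out all the users, select active-but-unpaid ones, email each.
    expired = User.all.select { |u| u.active? && u.last_paid_at < (Date.today << 1) }
    expired.each { |u| UserMailer.billing_problem(u) }
  end
end

# --- the isolated test ---
bob = User.new("bob@example.com", true, Date.today << 2)  # last paid two months ago

# Stand-in for RSpec's stubbing of User.all: isolates Sweeper from the database.
User.define_singleton_method(:all) { [bob] }

Sweeper.new.sweep
```

Running this leaves exactly one recorded delivery, to Bob, without any database or mail server involved.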
So these are all very good things that you want, but they are balanced out by a very bad thing. And that bad thing is that in test, you're running against a mock and a stub, and in production, you're running against real classes. And if you don't stub the boundary correctly, your test will pass and your production system will be wrong. And this is such a big problem that, for most people, I think it overshadows all those benefits. Even if you explain them to them, they're going to look at this problem and say it's not worth it. Now, there have been attempts to fix this, various approaches to try to solve this problem in one way or another. One of which is to solve it with more testing: contract and collaboration tests. This is an idea most closely associated with J.B. Rainsberger, who is one of the people most influential on my understanding of isolated unit testing. I've not actually done this, and something about it doesn't resonate well with me, but it is one attempt to fix this. There's also the tools approach. rspec-fire is a tool in Ruby that tries to solve this problem. If you mock a class with rspec-fire, it will make sure that you only mock methods that actually exist. So it makes sure that you don't cause these boundary problems, or at least that you don't cause simple boundary problems. And finally, you can solve this with static typing, like so many things in life. It comes with all the same costs you pay to solve anything with a powerful static type system. But if you think about your mocks as being subclasses of the real class that just remove all the actual implementations, that gives you an idea of how static typing can solve this boundary problem. All of these only solve simple, surface-level mismatches between objects. They solve things like "I called the method with the wrong name" or "I passed the wrong number of arguments." They don't solve deeper things like "my two algorithms that need to cooperate don't actually cooperate correctly."
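As a rough illustration of the rspec-fire idea (this is not the gem's real API, just a toy with the same spirit), a verifying double can be hand-rolled in a few lines: it refuses to stub any method the real class doesn't define, which catches the simple call-boundary mismatches described above.

```ruby
# A toy verifying double: you may only stub methods that exist on the
# real class, so a typo in a method name fails at stubbing time.
class VerifyingDouble
  def initialize(klass)
    @klass = klass
    @stubs = {}
  end

  def stub(name, value)
    unless @klass.method_defined?(name)
      raise NoMethodError, "#{@klass} has no instance method ##{name}"
    end
    @stubs[name] = value
    self
  end

  def method_missing(name, *)
    @stubs.fetch(name) { super }
  end

  def respond_to_missing?(name, _include_private = false)
    @stubs.key?(name) || super
  end
end

class User
  def active?
    true
  end
end

# Stubbing a real method works; the double returns the canned value.
FAKE = VerifyingDouble.new(User).stub(:active?, false)

# Stubbing a misspelled method is rejected instead of silently passing.
MISMATCH_CAUGHT =
  begin
    VerifyingDouble.new(User).stub(:actve?, false)  # typo on purpose
    false
  rescue NoMethodError
    true
  end
```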
The way that you can solve that, and the most common way people try to fix this problem, is by just not doing isolated unit testing, by just integrating. The problem with solving the isolation problem with integration is that integration tests are a scam. I can't take credit for this sentence. This is, once again, J.B. Rainsberger. There's a talk called "Integration Tests Are a Scam," which you should all watch. It's a really good talk that really lays out the argument for why integration testing doesn't work on a long enough time scale. And he nowadays uses the terminology "integrated test" to mean any test that's integrating multiple pieces. I'll give you the really quick and dirty argument for why integration tests don't work. The number of paths through your program goes like two to the N, where N is the number of branches or conditionals. That includes exception handling; that includes a short-circuiting Boolean expression; that includes a loop. Every time a branch is happening, if you have N of those, you have two to the N paths. And if you're trying to test the whole thing, you have a space of two to the N paths to choose from. If you have 500 conditionals in your program, this is a number with about 150 digits in it. It's a very large space, and it's very difficult to effectively choose which paths matter, because they're effectively uncountable to you. The other problem is that suite runtime in an integration suite is super-linear. Whenever you add a unit test, or whatever kind of test you're writing, you're also adding a little bit of code. So your number of tests goes up by one, and you make the system a little bit bigger, which means all your existing integration tests get a little bit slower. So every time you add a test, there are two sources of slowness, one of which is linear and one of which is something else I'm not sure of. But together, it's definitely a super-linear runtime.
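The path-count arithmetic here is easy to verify directly: each independent branch doubles the number of paths, so 500 branches give 2 to the 500 paths, a number with just over 150 digits.

```ruby
# N independent branches means 2**N possible paths through the program.
paths = 2**500

# How many digits is that? (The talk says "about 150".)
DIGITS = paths.to_s.length
```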
And anyone who has a three-hour Rails test suite will be able to tell you that this is, in fact, the case. And they will probably not like their lives very much, either. So that's all background. This is how I came to the ideas that I'm going to talk about for the rest of this talk. Isolated testing, and figuring out how to do it well, has been a large focus of my software development career for the last five years. So now let's shift gears entirely and talk about values, values meaning the pieces of data inside of a program. If you want to test the plus method, and let's just think about plus on machine integers, and for whatever reason you decide you want to test it in isolation, so you don't want any other dependencies involved in the testing, what do you have to do to isolate plus? Nothing. It isolates for free. Plus doesn't have any dependencies. There's nothing to mock out. There's nothing to stub. It's totally local. And why is that the case? It's not just because plus is simple. It's tempting to say, "Oh, plus is simple, so of course it isolates for free." That is not what's happening. It has two properties that are necessary to be naturally isolated with no stubs or mocks. The first is that it takes values as arguments, and it returns new values, and it doesn't mutate those values. It just gives you a new value, right? It takes an integer and an integer, and it gives you an integer. The second property is that it doesn't have any dependencies. There's nothing to mock. It doesn't need anything else. It's a local computation that just produces a new value. So how could we apply that to more complex code that we work with all the time, stuff like the Sweeper? Well, let's go through this and just impose both of these constraints and see what happens.
Starting with the Bob stub: we can't use a stub, because we're not faking out any boundaries, so let's replace that with a user object, but not an ActiveRecord object, just a struct, a piece of data. Even a hash; I wouldn't use a hash, but you could. We can't do the User.all stub, because we're not allowed to, so we'll just delete that. And then in the actual body of the test, instead of doing a mock expectation, we can just call the method and get back the array of users who are expired. Now, this does less than the original code, but we're going to get to that later. The implementation changes: we basically lose the second half. We now have a method that goes through all the users and filters out the expired ones. This difference is huge. The difference between the original code and this is huge. The nature of the communication between the components has changed. Instead of having synchronous method calls as the boundaries between things, we now have values as boundaries. The value returned or taken by the method is the boundary between it and another object. Now, just as a quick digression: when I talk about values, I often mean things like this, a class that is a struct that has two fields, title and body, and has a slug computed from the title. But behaviorally, this is equivalent to a class that has a title, body, and slug and computes the slug at creation time. They're basically the same thing. The only way to tell the difference from the outside is timing properties on the method calls. So I'm going to use these two ideas interchangeably, but really they're basically the same. So we've seen isolated testing as a bit of background, and the idea of converting the code in the system to communicate via values at the boundaries instead of via message sends or method calls at the boundaries.
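A sketch of that value-based version, with names following the talk (the struct fields and the one-month cutoff are my assumptions about the details):

```ruby
require "date"

# A user is just a value: a struct, not an ActiveRecord object.
User = Struct.new(:name, :active, :last_paid_at) do
  def active?
    active
  end
end

# The functional piece: values in, values out, no dependencies to mock.
class ExpiredUsers
  def expired_users(users)
    users.select { |u| u.active? && u.last_paid_at < (Date.today << 1) }
  end
end

# The test just calls the method and inspects the returned value.
bob   = User.new("bob",   true, Date.today << 2)  # paid two months ago: expired
alice = User.new("alice", true, Date.today)       # paid recently: fine
EXPIRED = ExpiredUsers.new.expired_users([bob, alice])
```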
And now I want to look at how this fits into the three dominant programming paradigms, putting aside logic programming. How does this relate to procedural, OO, and functional programming? Here's a small piece of procedural code. We want to feed some walruses. So for each of the walruses, we shovel some food into its stomach; we shovel some cheese into the walrus's stomach. There are two properties of this code that make it very obvious that it's procedural. The first is the each. Whenever you see each in Ruby, there's something destructive going on; each with a non-destructive body is a no-op. So there's something destructive happening, and we know the structure of the walrus and the structure of its stomach. We know it has a stomach. We know the stomach can have things shoveled into it. We have knowledge of the internals. Contrast this with the OO solution, where you still have an each, but now we tell the walrus to eat something. It knows how to eat, instead of us knowing about its stomach. And then the eat method will shovel things into the stomach. Same code as before, just encapsulated. And my Bluetooth is dying again. So we have two paradigms here. Both of them involve mutation. One of them separates data and code; that's procedural. One of them combines them into units called objects. If we add functional to this, instead of doing an each, we do a map. We're going to take all the walruses and produce new walruses that are slightly different. So for each of them, we're going to call eat on the walrus and some food, some cheese. And I'm going to use a hash for the walrus, an array for the stomach, and strings for the food. The eat function is kind of weird: we build a new stomach that's the old stomach plus the new food, and then we build a new walrus that's the old walrus with the new stomach. You can see why OO models real-world things a little better than functional programming does. Okay, so that's functional. Nothing is being mutated, right?
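The three styles might be sketched like this; the exact code is my reconstruction of the slides, but the hash-for-walrus, array-for-stomach representation is the one the talk describes for the functional version.

```ruby
# Procedural: we know the walrus's internals and mutate them directly.
def feed_procedurally(walruses)
  walruses.each { |w| w[:stomach] << "cheese" }
end

# OO: the walrus knows how to eat; same mutation, but encapsulated.
class Walrus
  attr_reader :stomach

  def initialize
    @stomach = []
  end

  def eat(food)
    @stomach << food
  end
end

# Functional: no mutation; eat builds a new stomach and a new walrus.
def eat(walrus, food)
  new_stomach = walrus[:stomach] + [food]
  walrus.merge(stomach: new_stomach)
end

def feed_functionally(walruses)
  walruses.map { |w| eat(w, "cheese") }
end

old = [{ stomach: [] }]
FED = feed_functionally(old)
UNTOUCHED = old.first[:stomach]  # the original walrus is unchanged
```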
So we have no mutation, but data and code are separate; they are not combined into single things. Now if you look at this table, obviously I've left a row out; there's one more row to go. But even just looking at the variables, we have two variables: does it mutate or not? Does it bind data and code together or not? They clearly vary independently, which means we have four possibilities. So what is the fourth possibility? Not logic programming, by the way. Here's what the fourth possibility looks like. We map, like in functional programming, so we're producing new walruses, but we're telling the walrus to eat something, and that's not a destructive eat. Instead, the eat method constructs a new walrus that is the old walrus with the new stomach that contains the new food. So it has the immutability of the functional code, but it combines data and code together like OO does. And that is the fourth entry, and I lovingly call it FauxO, because it's not real OO. Now, there's a problem with programming this way, and that problem is that you lose the ability to do anything destructive, to talk to the network, to talk to the disk, to do any kind of I/O. You lose the ability to maintain state over time. So to reintroduce the idea of state, we have to add imperative programming back into this FauxO style of programming. We have to figure out how to compose the user database, the ExpiredUsers class, and the mailer together, even though the ExpiredUsers class is functional in nature. So we have our ExpiredUsers; it returns an array of users who we need to notify, and what we need to do is reintroduce the imperative layer around it, an imperative shell that surrounds the functional core. It talks to the database, it uses ExpiredUsers to filter those users, and then it emails each of the ones that comes out. So the imperative shell is a layer that surrounds the functional core.
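A sketch of that fourth quadrant, with eat returning a brand-new walrus (again my reconstruction of the slide, not the literal code):

```ruby
# FauxO: data and code bound together, like OO, but no mutation, like FP.
# eat doesn't shovel into a stomach; it constructs a whole new Walrus.
class Walrus
  attr_reader :stomach

  def initialize(stomach = [])
    @stomach = stomach
  end

  def eat(food)
    Walrus.new(stomach + [food])
  end
end

WALRUS = Walrus.new
FED = [WALRUS].map { |w| w.eat("cheese") }.first  # map, as in the talk
```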
The functional core is the bulk of the application; it has all the intelligence. The imperative shell is a glue layer between the functional pieces of the system and the nasty external world of disks and networks and other things that fail and are slow. If we look at what's actually happening in these two things, it's not an arbitrary distinction. Even though all I did was cut the original method in half, the decision runs very deep. If you look at what these things do, the ExpiredUsers class makes all the decisions, and the Sweeper class has all the dependencies. So if we look at the way that relates to testing: the functional core is heavy on paths, heavy on decisions, light on dependencies, which is exactly what unit testing is good at, especially isolated unit testing. When you take away the need to stub out the dependencies, you can just focus on the logic, and the tests become very simple. The same thing is true for the shell. Lots of dependencies and few paths is exactly what an integration test is good at, because it makes sure all the boundaries are lining up, all the pieces are communicating correctly, but you don't have a lot of test cases, which means you don't end up with a 30-minute or a three-hour test suite. Just to get a sense of what that integration test might look like, since we already saw the unit test: maybe I create two users in the database, actually create them in an actual database. I invoke the sweeper, I pull out all the mails that were delivered by ActionMailer, and I make sure that only Alice was mailed. She's the only one who's expired here; she paid two months ago, Bob paid yesterday. But I only have to write one of these, whereas I'm going to have to write a bunch of the isolated tests on the functional core.
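Putting the pieces together, a minimal core-plus-shell sketch might look like this. To keep it self-contained, the database and mailer are injected as in-memory fakes; that injection is my simplification rather than the talk's actual code, which uses a real database and ActionMailer.

```ruby
require "date"

User = Struct.new(:email, :active, :last_paid_at) do
  def active?
    active
  end
end

# Functional core: all the decisions, no dependencies.
class ExpiredUsers
  def expired_users(users)
    users.select { |u| u.active? && u.last_paid_at < (Date.today << 1) }
  end
end

# Imperative shell: all the dependencies, almost no decisions.
class Sweeper
  def initialize(db, mailer)
    @db = db
    @mailer = mailer
  end

  def sweep
    ExpiredUsers.new.expired_users(@db.all_users).each do |user|
      @mailer.billing_problem(user)
    end
  end
end

# A tiny "integration" run with in-memory fakes standing in for the
# real database and ActionMailer.
alice = User.new("alice@example.com", true, Date.today << 2)  # expired
bob   = User.new("bob@example.com",   true, Date.today)       # paid recently

fake_db = Struct.new(:all_users).new([alice, bob])
MAILED = []
fake_mailer = Object.new
fake_mailer.define_singleton_method(:billing_problem) { |u| MAILED << u.email }

Sweeper.new(fake_db, fake_mailer).sweep
```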
So now we have a solution to the isolation problem for most code in the system, because we can build it all as functional pieces in this FauxO style, where there are still objects, but they're not mutating; they're just taking values in and out. And we have a way to reintroduce the imperative part around it so we can actually talk to the outside world. And it turns out that this leads to all kinds of amazing benefits, not just the testing benefit, not just the fact that functional code is easier to reason about over time; it even makes certain types of concurrency much easier. Think about the actor model of concurrency, which is the one that I have the most faith in as something approaching a general-purpose concurrency style, or concurrency programming method. Let me quickly explain it to you, just in case everyone's not familiar. I'm going to do it with just threads and queues. So we have a queue, and this is going to be the communication mechanism between two processes. It is the inbox of process two. Process one is going to send to it. For process one, I'm just going to fork off a thread that is going to infinitely loop, reading from standard in and pushing into the queue. Process two is going to infinitely loop, reading from the queue and writing to standard out. This is an echo program that's communicating through a queue, where the queue is the inbox for process two. If I just run this at the shell and start typing things into it, it's just going to print out whatever I typed in. This is the simplest way I know to explain the actor model. You have independent processes. Each of them has an inbox that is only readable by that process, and they communicate by sending messages to each other, into each other's inboxes. The way that this relates back to Functional Core, Imperative Shell, to FauxO, to the idea of having lots of values, is that every value in your system is a potential message, a possible message between two processes.
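The threads-and-queues echo program can be sketched like this. To keep it self-contained and runnable without a terminal, I've swapped standard in and standard out for a canned list and a results queue, and added a sentinel message so the loop can stop; those are my changes, not part of the original.

```ruby
inbox = Queue.new   # process two's inbox: the only shared channel
output = Queue.new  # stand-in for standard out, so we can inspect it

# "Process one": reads each line and pushes it into process two's inbox.
# (The talk reads from standard in; a fixed list stands in for it here.)
producer = Thread.new do
  ["hello", "world"].each { |line| inbox << line }
  inbox << :done    # sentinel so the echo loop can stop (my addition)
end

# "Process two": loops reading its inbox and echoing each message.
echoer = Thread.new do
  loop do
    msg = inbox.pop
    break if msg == :done
    output << msg   # the talk writes to standard out here
  end
end

[producer, echoer].each(&:join)

ECHOED = []
ECHOED << output.pop until output.empty?
```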
Every value that is struct-like and can be easily serialized can also be easily sent over the wire. And this is a special case of the value being the boundary between the components. So we rewrite our sweeper in a slightly different way. We have a sweep method; it calls expired_users on User.all, so it pulls everything out of the database, finds only the expired ones, and then emails each of those. This is the imperative shell that you're looking at right now. The functional core is the ExpiredUsers class, or the expired_users method, excuse me; it's going to do what it did before. It's just going to filter out expired users. And then we have this very trivial notify-of-billing-problem thing that just delegates to the mailer. Let's translate this into the actor model. The first one is the actor that pulls everything out of the database, sends the users one by one into the expired-users actor, and then dies. If I didn't do die, then this would loop infinitely. The expired-users actor is just going to pop a user off of its inbox. It's going to decide whether that user is late, and if it is late, it's going to forward that user on to the mailer process. And the mailer process is just going to invoke the mailer. So the imperative shell is sort of a bigger process. It takes a little while to run. It fires off all these messages to the smaller processes. And what we've just done is convert a program that could only use one core into a program that can use three cores, not on MRI, but on other VMs. We've parallelized this by doing very little work, because we had the values available to send over the wire. Oh, I forgot to actually translate that. There's the new version. It's basically the same thing as the old.
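A sketch of that three-actor translation, again with just threads and queues, and with a canned user list and a recording queue standing in for the database and the mailer (my stand-ins, so the sketch runs on its own):

```ruby
require "date"

User = Struct.new(:email, :active, :last_paid_at) do
  def active?
    active
  end
end

alice = User.new("alice@example.com", true, Date.today << 2)  # expired
bob   = User.new("bob@example.com",   true, Date.today)       # fine

expired_inbox = Queue.new
mailer_inbox  = Queue.new
mailed        = Queue.new

# Actor 1: pulls everything "out of the database", sends the users one
# by one into the expired-users actor, and then dies.
db_actor = Thread.new do
  [alice, bob].each { |u| expired_inbox << u }
  expired_inbox << :die
end

# Actor 2: pops a user off its inbox, decides whether the user is late,
# and forwards late users on to the mailer actor.
expired_actor = Thread.new do
  loop do
    user = expired_inbox.pop
    if user == :die
      mailer_inbox << :die
      break
    end
    mailer_inbox << user if user.active? && user.last_paid_at < (Date.today << 1)
  end
end

# Actor 3: invokes the mailer (here it just records the delivery).
mailer_actor = Thread.new do
  loop do
    user = mailer_inbox.pop
    break if user == :die
    mailed << user.email
  end
end

[db_actor, expired_actor, mailer_actor].each(&:join)

DELIVERED = []
DELIVERED << mailed.pop until mailed.empty?
```

Each actor owns its inbox and nothing else is shared, which is why the same values that crossed method-call boundaries before can cross thread (or process) boundaries here unchanged.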
Values in your system afford shifting process boundaries. But really, in general, values in your system afford shifting boundaries between anything: between class arrangements, between subsystem arrangements, between the ways that you're building your program, whether it's serial or parallel. So programming in this style has surprisingly deep effects on the things you can do and the way that you can do them. That was a lot of stuff, so now I'm going to try to restate it in like three minutes to make it all tie together. In this style, you design your program as a core of independent functional pieces that take values and return values. The imperative shell orchestrates the relationships between those, interfaces them to the network, the disk, and other nasty systems like that, and maintains state. For example, I wrote a Twitter client in this style. It's a terminal program, but it's interactive, like Vim would be. So you hit J to go down to the next tweet. The imperative shell sees the J and calls into the functional core to generate a new cursor position. The new cursor is generated and returned, and then the imperative shell updates the instance variable holding the cursor to be the new cursor. The functional core built the new cursor, and it was a purely functional operation. The imperative shell just updates references to these new objects as they're constructed. What you get from this is easy testing, especially isolated testing. You also get easy integration testing, and the distinction between which one happens where is a lot more obvious than it is if you just start throwing things against the wall and try to figure out what gets tested how later. You get fast tests. You don't have to do any weird stuff to get fast tests. They're just inherently fast, because they're functional and working on small pieces of code. You have no call-boundary risk. You don't have to stub or mock.
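The cursor example might look something like this; the class and method names are guesses at the shape being described, not the real Twitter client code.

```ruby
# Functional core: a cursor over a list of tweets. down returns a new
# cursor rather than mutating this one.
class Cursor
  attr_reader :tweets, :index

  def initialize(tweets, index = 0)
    @tweets = tweets
    @index = index
  end

  def down
    # Purely functional: a new cursor one tweet down, clamped at the end.
    Cursor.new(tweets, [index + 1, tweets.length - 1].min)
  end

  def selected_tweet
    tweets[index]
  end
end

# Imperative shell: holds the current cursor and just reassigns the
# reference when a key arrives.
class Shell
  attr_reader :cursor

  def initialize(tweets)
    @cursor = Cursor.new(tweets)
  end

  def handle_key(key)
    @cursor = @cursor.down if key == "j"
  end
end

shell = Shell.new(["first tweet", "second tweet"])
shell.handle_key("j")
SELECTED = shell.cursor.selected_tweet
```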
You have easier concurrency, at least in the actor model, and you have a more fluid transition between concurrent and serial computation, and that's all just a special case of having higher code mobility in general: moving code between components, moving code between processes. So that is the end of the actual talk. Once again, I'm Gary Bernhardt. I run Destroy All Software, which produces screencasts, and if you are a subscriber or want to become one (it is not free), there is a screencast on Destroy All Software called Functional Core, Imperative Shell, which is the first time I ever talked about this in public, and the one that's coming out two weeks from now is also about this topic and expands on it a little more. In that screencast, I give a much larger example than I can really give here, but I show you the Twitter client and how it's arranged and how the different parts of the system are segregated in this way. So with that, thank you guys very much for listening to me for half an hour. That actually went way faster than I expected, so I would be happy to take comments or questions or... yeah? Do you think there are any useful distinctions, besides the functional bit, between this and the ports-and-adapters architecture? Right, that's a wonderful question. The question is about the relationship to ports and adapters, or hexagonal architecture, or these kinds of things. Yeah, so if you're building a large system that's going to be 30,000 lines of code, you don't want to have one functional core and one imperative shell. If you ask a Haskell programmer about doing this, they will tell you that it just becomes a nightmare. I think that the ideal large system is actually many smaller systems built in this sort of way. You have the functional pieces, you wrap them in a layer of scar tissue to interface them to the nasty outside world, and then you build a bunch of those that communicate in destructive ways. Does that answer the question? Um... Sure.
There's no adapter in that explanation, but it's sort of... The adapters are the scar tissue. Yeah, exactly. That's true. I guess, yeah, to some extent, the imperative shell is just an adapter. Fair observation. Yeah, over on the side. Have you found success in using the actor model in Ruby, and what libraries are you using for it? The question is, have I found success in using actors with Ruby? The answer is no. So this Twitter client that I wrote as I was figuring this out does use the actor model, but it's just threads and queues. I just built a little actor library; it's like 35 lines of code. A simple actor library is easy; with a more complex one, I see diminishing returns if your VM isn't built for it. You can't spawn half a million processes in Ruby. Your machine is just going to explode into smoke. Use Erlang. Yeah. So I'm curious to know if you've run into problems in trying to use this paradigm and bring in other gems and libraries at the same time. Let's say a traditional Rails app. How suited would a Rails app be to this paradigm? The question is, how suited would a Rails app be to this style of development? The answer, once again, is no. It's not going to work very well. It depends on how large your Rails app is. The thing about a Rails app is, if your Rails app has 100,000 lines, you don't have a Rails app. You have 95,000 lines of your application and 5,000 lines of Rails glue code, and probably what you've done is dumped those 95,000 lines into models, controllers, and helpers and failed to actually design your system. If you have designed a system and treated Rails as a small component of it that you want to mostly protect yourself from, then you might be able to do this. But to be honest, I've never even thought hard about how you would do that. I guarantee it's possible, but you're not going to transition your large Rails app into this easily.
It seems like the response to that would be: we do leverage a lot of previously written software, and that helps us a lot, and what you're proposing is really a pretty dramatic overhaul, where you can't leverage as much. With the imperative shell wrapped around the functional core, you can do whatever you want out there, right? So you can use... I mean, my Twitter client uses, well, not tons, but like six or eight gems, normal gems that just work like anything else. And they're imperative in nature, as Ruby programs tend to be. And I just put them out in the scar-tissue layer, and I let that be as big as it needs to be to reasonably allow me to use them. And then the functional core doesn't have to see any of that stuff. This is the difference between just thinking about that FauxO style of programming, the functional-OO style, and actually adding the imperative shell. The imperative shell is what allows you to work in this way. Yeah. So when you gave the functional example, you create new walruses and return walruses from that function. But from a true functional standpoint, right, you'd just return data, not objects. So how do you feel about, like, going to the next level? Like, there's nothing special about a walrus that has a stomach; all animals have stomachs. So you could just return data with a stomach key, and that would get passed around. So, you know, how do you feel about going to the next level? I guess that's what I'm saying. Sorry, I missed the last sentence. Going even more functional, I guess. Well, I wouldn't consider changing from returning walruses to returning stomachs as more functional. No, I'm not saying that. I'm saying it's returning a stomach key. Like, a data structure that is the animal with a stomach key that would get passed around.
So it's returning a data structure, as opposed to a data structure full of objects. Right. Well, if you look at the code, I used the word walruses, but really there's nothing especially walrus-y about the code. You could replace that with animal, and it wouldn't know the difference, right? It just knows that there is a stomach key, and inside of the stomach is an array of various foods. So it's not tied to the walrus-y nature of the walrus. But you're using a walrus, right? The class of the walrus is not Animal, right? I mean, I know it's an example. Well, no, I never mentioned a Walrus class. I could have used the word animal and it would have been the same thing, right? As long as it has a stomach, that code will work on it. I just used walruses to make it more concrete. My point was, how do you feel about just returning data rather than objects? The values are the data. Then, like, if the values are the boundaries, why return objects rather than data? Objects are data. If all the methods on an object are functions, then the object is data, and it's indistinguishable from an object that is a struct, that has everything early-bound, right? Late binding only matters in a system with mutation in it. This is why, for example, Haskell is lazy. Well, Haskell's weird. Yeah, I don't know how else to say that. I feel like I'm failing to understand some part of your question. Okay, yeah. I can choose not to mutate things, but, like, you called merge and then returned the hash; the more intuitive thing might be to use the square-bracket operator to patch the instance, or you may have to call things like dup, and it sometimes gets awkward. Yeah, so, well, if we go back to the place where I actually did that, where I merged, way back here... there it is. This was actually the functional example, right? If you look at the FauxO example, I just did Walrus.new, which is a little more natural.
There's not an easy way to say "I want a new object with only this field changed," because Ruby's not designed for that, but that is easier to build in than it would be to replace all your core types. The nice thing about the Ruby core types is that the really scary things, the mutations, usually have bangs on them. It's not true for, like, delete, but the names are usually very obvious that they're mutating, or they have a bang on them. I've actually not found a problem maintaining functional data-structure-manipulation code in Ruby. Your mileage may vary. Yeah, in the back. So, my question is, you kind of created a class whose one operation is finding all the expired users. If your functional core is layered out of a whole bunch of these, doesn't that make it harder to follow exactly what your program is doing as you move through the different algorithms and pieces? It certainly could. You have to spend... What I've found is that the choice of which classes you have in the core is extremely important: the names of them and the way that the responsibilities are divided up. So, actually, I could pull up part of the Twitter client and show you guys a larger example. Let's see. Wait, where am I? Yep. So, for example, the cursor. Cursor. This is a piece of the functional core. Its state includes the tweets, a list of tweets, and then the selection, the currently selected tweet. So this encapsulates all the behavior of the cursor. And, actually, why is my keyboard not working? Some of this is gross. It's actually quite a large class. This is one of the largest classes I've written since I started programming Ruby. It's almost 100 lines. But that's because it's really like a very small module of functional code that's just sort of self-contained. And then, if we look at the actual imperative shell, this is the entire shell. It's 153 lines. Is that what that says? Can you guys read that?
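The "new object with only this field changed" operation the exchange above is missing is easy to build onto a struct-style value. This is my sketch, not anything built into the Ruby of the time (current Rubies do ship Data#with, which does the same job):

```ruby
# A hypothetical `with` helper for struct-style values: returns a copy
# with the given fields changed, leaving the original untouched.
Post = Struct.new(:title, :body) do
  def with(changes)
    self.class.new(*to_h.merge(changes).values_at(*members))
  end
end

original = Post.new("Boundaries", "values at the boundaries")
UPDATED = original.with(title: "Boundaries, revised")
ORIGINAL_TITLE = original.title
```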
There. Sorry about that. So, let's see. Where's cursor... something with tweets... no, starting at index. The shell is sometimes a little bit awkward. Here we go. So, here is the cursor actually being manipulated. When you hit J, it just reassigns the current cursor in the shell to the result of doing cursor.down. And if we look at cursor.down, all it does is construct a new cursor. So, the fact that I chose cursor to be one of the boundaries in the functional core is very important. If I had a tweet list and was maintaining a selection separate from that, this would have been awful. It's very important to find the boundaries that make very small, cohesive functional components, but not too small. I mean, I showed you three-line examples in the talk, but that's because it's a talk. Really, you want pieces larger than that, but smaller than a whole subsystem. Does that answer your question at all? That was Chuck, right? Yeah. I can't see, but I can hear. It kind of answers the question. Okay. I think I would just have to dig in and play a little bit more and figure out what makes a good boundary and what doesn't. Yeah, that's the hard part. I mean, that's always the hard part, right? But separating things that do mutation from things that don't gives you a starting point, and it's the best starting point I've found. It's not an absolute rule, but if you start there, as opposed to some other arbitrary rule, I've found much better results for design. Other questions? There is no library. The Twitter client is not online, because I stopped working on it because it turned out that Twitter's evilness is growing much like the test runtime of an integration suite, and I lost confidence that I should build software that interacts with it. Sorry, Twitter employees. I assume there's some here. Yeah, so none of it's... Sorry? Fair enough. Fair enough. At least it scales. Okay, sorry. So, you're creating so many new objects here.
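What he's describing on screen might look roughly like this. The Cursor and shell below are a reconstruction from his description, not the client's actual code:

```ruby
# Functional core: Cursor never mutates; down returns a brand-new
# Cursor, clamped at the last tweet. Reconstructed from the talk's
# description, not the actual client code.
Cursor = Struct.new(:tweets, :index) do
  def down
    Cursor.new(tweets, [index + 1, tweets.length - 1].min)
  end
end

# Imperative shell: holds the one mutable reference and reassigns it
# in response to keypresses, which is the j-key behavior he describes.
class Shell
  attr_reader :cursor

  def initialize(cursor)
    @cursor = cursor # the only mutable state lives out here
  end

  def handle_key(key)
    @cursor = @cursor.down if key == "j"
  end
end

shell = Shell.new(Cursor.new(%w[a b c], 0))
shell.handle_key("j")
shell.cursor.index # => 1
shell.handle_key("j")
shell.handle_key("j")
shell.cursor.index # => 2, clamped at the last tweet
```

The point about boundaries shows up here: because Cursor owns both the tweet list and the selection, "move down" is one pure function, and the shell shrinks to a reassignment.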
Have you seen any performance implications of that, since you're discarding them all as well? Yeah, I mean, the Twitter client doesn't really have many performance concerns. I mean, it does... When it comes up, it's sorting through thousands of tweets. It remembers everything all the way back, and so it has to do a merge of what it has versus what it sees from the API, but it's not doing anything really big. In MRI, your life may not be especially good if you're doing tons and tons of allocation. If you're in the JVM, it's much better, right? And if you're in a VM that's designed for constant object creation and destruction, a VM designed for functional programming, it's going to be even better than that. I would guess that the Erlang VM would handle this very well, for example, because in Erlang you're constantly making small objects and letting them be freed. So yes, doing this on MRI, if you have performance concerns, is probably going to be a little difficult. But you can do certain types of caching, right? If everything's a value and immutable, you can always cache things, because they don't change. So there are ways to work around the unfortunate nature of your VM. I saw a hand back there, yeah. What's the biggest thing you've built using this style, and do you have any concerns that, as it gets big, the ability to organize the pieces will suffer? Both good questions. What's the biggest thing I've built, and do I have concerns about scaling this into larger projects? The biggest thing I've built is the Twitter client. It's not that big. It's about 600 lines. And I would not be up here talking about this if that were why I thought this is good. The reason I think that this is good is that it has...
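The caching point is easy to sketch: because a value object never changes, a derived computation can be memoized once with no invalidation logic at all. The class below is illustrative, not from the client:

```ruby
# Because instances are immutable, a derived value can be computed once
# and cached forever; it can never go stale. Illustrative sketch, not
# code from the actual Twitter client.
class TweetList
  attr_reader :tweets

  def initialize(tweets)
    @tweets = tweets.dup.freeze # the underlying data can never change
  end

  # Memoized: safe only because @tweets is frozen.
  def total_length
    @total_length ||= tweets.sum { |t| t.length }
  end
end

list = TweetList.new(["hi", "hello"])
list.total_length # => 7
list.total_length # => 7, served from the cache, no recomputation
```

With mutable data, this memoization would be a bug waiting to happen; with values, it's free.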
it has shades of both the actor model built into it, the idea of functional pieces that are communicating by passing values back and forth, and it also is a lot like the Haskell idea of using the IO monad to encapsulate state, which is a wonderful idea that scales wonderfully up to about 500 lines of code, and then everything falls apart. You look at a 20,000-line Haskell program that does a lot of I/O and you're not going to like life. This is why I say I think that the larger program is smaller ones built in this way, communicating via channels external to the process. But what I'm really trying to do is merge this idea of actors, merge this idea of the IO monad, and bring them into the OO world using our terminology. I didn't talk about monads. I've only talked about actors at the end as an example. I'm trying to rephrase that stuff in terminology that we use so that it seems more directly accessible. But to get back to your question about larger systems: some of the most reliable large systems in the world are written in Erlang, and probably most of the reliable large systems in the world are written in Erlang. Lots and lots of nines. Not like Twitter's three nines. We're talking about like eight nines. And the fact that they can build large systems that are that reliable using the actor model tells you that there's something there, even if you don't know exactly what those words mean. That was a long-winded answer to a very simple question. Yeah. I wonder if there's an approach you might recommend if one were creating a new Rails app, and let's say they were creating a user model that subclasses ActiveRecord::Base. Is there an approach one might take to try to experiment with the techniques you're talking about, to try to isolate it? Right. Sorry. If you're building a new Rails application, and you're doing things like you have a User that subclasses ActiveRecord::Base, how do you go about doing this? I haven't gotten that far yet.
I have opinions about how you should be building that application, but they don't involve this. That's a different talk, called Deconstructing the Framework. But yeah, it's not clear to me yet. Give me a year or two. Others? Yeah. How do you deal with a case where the abstraction starts to leak? So, an example would be in the sweeper: you're passing in user.all, but it turns out the database is really fast at doing the filtering that the expiration logic is doing. Yeah. One of the reasons that my talks tend to take half as long as when I practice is I forget to give all the qualifications. Like, for example, you don't want to actually do that, right? You don't want to call user.all and then filter it in Ruby. Most of the complexity of your application is not database querying, right? I mean, there's plenty of querying in a complex app, but it is not 50% of your application. It's a fairly small percentage. And I think that probably, if you're using Postgres or MySQL or SQLite, that goes in the shell. If you're using something like Datomic, which is a database where everything is immutable, that can go in the functional core, right? It's just data structures. Datomic is just data structures. So it depends on the nature of your database, and the more your components are designed to work in this way, the more can move into the core. But it doesn't mean that pieces... It doesn't mean you can't do this if you have Postgres. It just means Postgres has to be relegated to the shell, which I think is fine. 80% functional is a heck of a lot better than 0%. You don't have to get to 99%. Yeah, front row. What keeps you in Ruby, as opposed to an essentially functional language? What keeps me in Ruby? Inertia, to a small extent. Also, I just don't like any of those languages. I have this problem where I can't not care about syntax. I really like syntax. And I've written... I wrote a lot of Lisp in college, and I just never really enjoyed it that much. Python and Ruby are what I like syntactically.
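The split he's describing might look like this for the sweeper example from the talk. The names and the 30-day cutoff are assumptions carried over from that example, not real code:

```ruby
# Functional core: a pure function over plain values. No database,
# no I/O; it only decides which users count as expired.
# (Names and the 30-day cutoff are illustrative assumptions.)
module SweeperCore
  THIRTY_DAYS = 30 * 24 * 60 * 60

  def self.expired(users, now:)
    users.select { |u| u[:active] && u[:last_paid_at] < now - THIRTY_DAYS }
  end
end

# Imperative shell: in a real app, the coarse filtering would be pushed
# into the database query, e.g. something along the lines of
#   User.where(active: true).where("last_paid_at < ?", 30.days.ago)
# with only the remaining decisions flowing through the core.

now = Time.at(1_000_000_000)
users = [
  { active: true,  last_paid_at: now - 60 * 24 * 60 * 60 }, # expired
  { active: true,  last_paid_at: now - 24 * 60 * 60 },      # paid recently
  { active: false, last_paid_at: now - 60 * 24 * 60 * 60 }, # not active
]
SweeperCore.expired(users, now: now).length # => 1
```

The querying lives in the shell next to Postgres; whatever judgment remains after the query stays a pure function you can test without a database.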
This is why I want to go live on a cruise ship and write a new language. How are we doing on time? It is 3:10. We can do a couple more, I guess, yeah. So, do you see a way for us in Ruby to fix these problems? Ways to fix these problems. Contribute to Rubinius for about a year, until you know how a Ruby implementation works, and then I'll tell you how to do it. You want persistent core types. You want core types that are designed to be used in this way. And from that, most of this will fall out pretty naturally. You probably want actors and lightweight processes, and you're going to have to build a user-land scheduler, but it's not that hard; that's what Erlang has. And if you have a user-land scheduler with lightweight processes, if you can fork 10,000, 100,000 processes easily, and you have immutable core types, that's the way towards doing this 99% of the time, or 95% of the time. Yeah, back right. I've already sort of asked this question before, but as someone who is very well acquainted with people from Twitter, and the Twitter API specifically, I would still encourage you to open source this, even if it doesn't execute. This is an example of this style. The question is, why won't you answer my question? No, that's legitimate. I do plan on putting this up eventually, even though I kind of am not happy about Twitter. I struggle with the idea of encouraging people to write software that interacts with something I don't like, versus demonstrating something that I think is good. So, yeah. Also, it's a little bit embarrassing. The shell is not actually tested at all. There are zero tests around it, even though it's 150 lines long, which I think will give people the wrong idea. I have reasons that I did that, but they're very hard to articulate in a readme that anyone will actually read. So, I'm a little torn about encouraging bad things. Middle right.
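The "user-land scheduler with lightweight processes" idea can be sketched with Ruby's built-in Fibers. This is a toy cooperative round-robin scheduler, nothing like Erlang's preemptive one, and all the names are made up for illustration:

```ruby
# Toy user-land scheduler: Fibers play the lightweight processes, and
# run_all round-robins them. Unlike Erlang, scheduling is cooperative;
# each "process" has to yield voluntarily.
class ToyScheduler
  def initialize
    @ready = []
  end

  def spawn(&block)
    # Wrap the block so a finished fiber returns :done as a sentinel.
    @ready << Fiber.new { block.call; :done }
  end

  def run_all
    until @ready.empty?
      fiber = @ready.shift
      result = fiber.resume
      @ready << fiber unless result == :done # re-queue if it only yielded
    end
  end
end

log = []
sched = ToyScheduler.new
sched.spawn { 3.times { |i| log << [:a, i]; Fiber.yield } }
sched.spawn { 3.times { |i| log << [:b, i]; Fiber.yield } }
sched.run_all
log # => [[:a, 0], [:b, 0], [:a, 1], [:b, 1], [:a, 2], [:b, 2]]
```

With persistent core types, each of these "processes" could only communicate by passing values, which is the 95% he's pointing at.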
I think that this is a little bit more of a meta question, but since I'm interested in a lot of your ideas here, I'm wondering if you've looked at array languages and their approaches to concurrency, versus thinking about things at the thread level. Because I kind of think the whole idea of working at the thread level is just too low-level for the long, far-flung future of concurrency. I'm just wondering what your opinions are on that. Right. So, the question is: have I looked at array languages? And you believe that thinking explicitly about threads, or I assume you mean processes as well, any kind of explicit control, is not the right long-term thing. The first answer is no, so that's easy. I mean, I'm familiar with J and all those languages. I don't actually know any of them. I've seen small snippets, but I don't understand them. The second part, about threads and processes not being the right primitive, I guess would be the word, right, the right primitive to build on: I'm not convinced that that's actually true. I'm not convinced that they're the wrong thing. I assume the alternative you're thinking of is things like a parallel map, right, like implicit parallelism. You're still writing sequential programs that just have parallel pockets, whereas in the actor model, everything is inherently parallel. I mean, if it's even remotely reasonably decomposed, right, as long as you don't have one process that's doing a ton of work. So I'm not convinced that threads and processes are wrong. Well, I'm convinced that threads are wrong if you're sharing state, but I'm not convinced that independent threads of control, independent processes of control, are the wrong thing. Yeah. Have you thought about writing your Twitter application using a more open protocol, like OStatus? Writing the Twitter app against a more open protocol like OStatus. I guess I could. It doesn't sound very interesting.
That's a problem. I already wrote it once. I don't want to write it again. Maybe I'll put it on GitHub and I'll accept pull requests that put it on a more open protocol. I think, yeah, it is time. Thank you guys very much.