There we go. All right, is this thing on? Perfect. Hi, I'm Zach Dennis, from across the lake in Michigan, from Mutually Human. This talk, Sandpiles and Software, is about a metaphor I got interested in at the end of last year, and I think it applies to software development, so I want to talk about that.

Before we get into sandpiles, though, I want to talk about this picture. We've got a small girl trying really hard to push a ginormous elephant onto a train car. In this picture, the girl is us, our software team, and the elephant is often our software. We're trying to push it around, but it's too big, too stubborn, too complex. It's extremely difficult to do. In Rich Hickey's talk on simplicity, he uses an analogy like this — I'm not sure he uses this photo — to talk about the inevitable fate of software: at some point it reaches a size and complexity where we can't push it around anymore.

So part one of this talk is going to use sandpiles as a metaphor for how our elephant got so big. Back in the '80s, three physicists — Bak, Tang, and Wiesenfeld — were studying dynamical systems: systems in which a microscopic event or change can cause macroscopic behavior or further change. And they noticed something really cool about how this works in nature with sandpiles. The thing I want to start with is the macro emerging from the micro. The way the sandpile model works, you have a fixed point at which you're dropping grains of sand. You're constantly feeding energy into the system by dropping grains into a pile. So my question to you is: as you continue to feed the system, what eventually happens? Right, you get a bigger pile of sand. That's a correct answer. There's a second correct answer — does anyone know what it is? Yeah, it landslides. It's inevitable.
It's always going to happen as long as you keep feeding the system. The landslide is a stress reducer for a sandpile, and it happens automatically: the system can only sustain so much stress, and then it has to do something to reduce it. This property is called self-organized criticality, and that's what a landslide represents in a sandpile. As you keep adding to the pile, you get closer and closer to the state where this property is realized.

Software is attracted to the same thing. The point at which a sandpile landslides is called the critical point, and I think software acts the same way. As long as you keep feeding the system — as long as you keep adding changes — you move closer and closer to the point where the software, in a sense, has to landslide. And I think that's directly related to the functionality we add, the changes we make, and the resulting increase in complexity.

Now, that chart is a little misleading, because software isn't linear. As we add change and grow our software, complexity isn't on a consistent slope upward. I think it works more like this: early on you can get away with a lot of change for small increases in complexity, but over time it looks more exponential. Another way to look at it: I gathered a bunch of different metrics on the Compass code base over time, going through all of its commits, and this chart shows the delta between the size of the code base and a complexity score based on the metrics a Ruby metrics library provides. I'm not saying this is scientific, but it was interesting to look at. The size change is the red line, and the blue is the complexity delta. You can see that small changes in code size can make complexity spike, and small reductions in code size can make complexity really go down.
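The model those three physicists studied — the Bak-Tang-Wiesenfeld sandpile — is simple enough to simulate in a few lines. This is a minimal sketch, not code from the talk: grains drop onto one fixed cell, and any cell that accumulates four grains topples, giving one grain to each neighbor (grains at the edge fall off and are lost).

```ruby
# Minimal Bak-Tang-Wiesenfeld sandpile. Drop grains at a fixed point;
# a cell holding 4+ grains topples, sending one grain to each neighbor.
# Grains that fall off the edge are lost -- the "landslide" outlet.
class Sandpile
  def initialize(size)
    @size = size
    @grid = Array.new(size) { Array.new(size, 0) }
  end

  # Drop one grain at (row, col); return the avalanche size, i.e. the
  # number of topples that single grain triggered.
  def drop(row, col)
    @grid[row][col] += 1
    topples = 0
    loop do
      unstable = []
      @size.times do |r|
        @size.times { |c| unstable << [r, c] if @grid[r][c] >= 4 }
      end
      break if unstable.empty?
      unstable.each do |r, c|
        @grid[r][c] -= 4
        topples += 1
        [[r - 1, c], [r + 1, c], [r, c - 1], [r, c + 1]].each do |nr, nc|
          @grid[nr][nc] += 1 if nr.between?(0, @size - 1) && nc.between?(0, @size - 1)
        end
      end
    end
    topples
  end
end

pile = Sandpile.new(11)
sizes = 2000.times.map { pile.drop(5, 5) }
puts "largest avalanche: #{sizes.max}"
puts "drops causing no avalanche: #{sizes.count(0)}"
```

Most drops cause nothing, some cause small slides, and occasionally one grain sets off a big cascade — the critical-point behavior the talk is describing, with no tuning knob anywhere in the code.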
Another interesting way software relates to all of this is that it's a system that feeds back into itself, much like a sandpile: the output of today becomes the input of tomorrow. This equation shows what happens when you make micro changes over a long period of time. There's a really small difference in the starting values here: 0.2 compared to 0.20001. For a long time your project goes along fine, everything's okay, but at some point in the future those small changes, those small decisions we've made, come back to impact us in bigger, noticeable ways.

So a lot of times we treat our application like one big sandpile: as we add stuff to the app, we put it all in one giant pile, and over time we end up with this giant, highly unstable system that could landslide at any time. A landslide in software, in my view — since it's not naturally occurring — is when the system is in an immovable state: you're trying to work on part of it and you can't make the changes you need until you go in, clean it up, and do some refactoring. And this isn't the kind of refactoring you want to do on a daily basis as you go — red, green, refactor — it's not that. This is what happens when you let the system run amok until it gets to such a messy point that you don't have an option anymore: you have to go back and clean things up before you can move forward.

So that's what we don't want. What we do want is to balance and distribute the responsibilities and the complexity in our app, so that we end up with smaller, more stable sandpiles.
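The exact equation on the slide isn't reproduced in this transcript, so as an illustration of the same effect, here is one classic feedback system — the logistic map, which is my choice, not necessarily the slide's formula — iterated from the two starting values mentioned, 0.2 and 0.20001. The point is just that under repeated feedback, a tiny input difference eventually dominates.

```ruby
# Feed each output back in as the next input: x' = 4x(1 - x).
# (The logistic map -- an assumed stand-in for the slide's equation.)
# Two starting values differing by only 0.00001 track each other for a
# while, then diverge completely.
step = ->(x) { 4.0 * x * (1.0 - x) }

a = 0.2
b = 0.20001
diffs = 50.times.map do
  a = step.call(a)
  b = step.call(b)
  (a - b).abs
end

puts "difference after  5 steps: #{diffs[4].round(8)}"
puts "difference after 50 steps: #{diffs[49].round(8)}"
```

Early on the difference is still tiny; after enough iterations the two trajectories are unrelated. That's the "project goes along okay, then small decisions come back to bite us" curve in miniature.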
And sometimes it makes sense to actually break those things out, because when you've got completely separate sandpiles, you've created a clear boundary, and you're not letting complexity from one system leak into another.

Moving back to the chart, this is what I see happen on a lot of projects we get pulled into. There's a vicious stop-and-go cycle. Out of the gate, your team is going really, really fast — go, go, go, get everything in there — and you're racing as fast as you can up to the critical point. You don't notice it at first, because you're making a lot of progress. But as soon as you get up there, the system becomes more difficult to work with as the complexity goes up. You get into this vicious stop-and-go cycle where you clean up just enough to move forward again, then you're back to adding a bunch of changes, and then you're stuck again. Then you've got to go back and clean some stuff up, then add some more changes. But at this point the complexity of the system is at its peak, so this stop-and-go cycle is the most inefficient way you can work on a project: you're going as slowly as possible because the system is so hard to maneuver around.

What we should be doing instead is being the purple line: admitting that it takes more effort initially to come up with guidelines, conventions, things our team is going to do, so that we can keep complexity as low as possible for the longest period of time. That's our challenge as software developers. As long as our systems are living and evolving, we're going to get complexity up to that critical point at some point. It's just a matter of how long we can hold it off.
One exception to this rule is small applications — apps that are narrowly focused, well defined, and have a clear end date. They can get away with things you can't do in a larger, longer-running application, because there's a good chance the project will be done before it ever reaches the critical point, so the problem is never noticeable. I see this with a lot of junior devs who have worked on really small web apps and then come onto a team working on a larger one. They try to bring over all the practices they used on the small app and get away with them on the larger app, but they get bitten, because the way they design the application and distribute responsibilities in their code just doesn't work that well for larger, longer-running applications.

All right, so this equation is from the COCOMO II cost estimation model. I watched a video where Grady Booch talked about it — he didn't come up with it, but he was talking about it — and I thought it was interesting. Performance is your complexity raised to a process exponent, times team, times tools. The interesting thing is that complexity is the major factor impacting your ability to keep performing on your application's development over time. And since process is an exponent, a good process can really dampen complexity, whereas a bad process can really amplify it. Both of those things combined are going to be the major impacting factor, I think, of — what's that? Yeah, yes, yeah, Steve. I'm going to update this slide for the next time I give this talk. Steve is right, and I think Barry Boehm, who came up with the COCOMO II model, missed that critical insight.

All right, so our goal is: how do we keep the software away from this critical point?
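To make the exponent point concrete, here's a toy calculation with made-up numbers. This is only a sketch of the relationship as described above — complexity raised to a process exponent, scaled by team and tools factors — not the real COCOMO II formula with its calibrated cost drivers.

```ruby
# Toy model of the relationship described on the slide (numbers invented):
# cost grows with complexity raised to a process exponent, scaled by team
# and tools factors. An exponent below 1.0 (a good process) dampens
# complexity; an exponent above 1.0 (a bad process) amplifies it.
def project_cost(complexity, process_exponent, team: 1.0, tools: 1.0)
  (complexity**process_exponent) * team * tools
end

complexity = 100.0
puts "good process (exponent 0.9): #{project_cost(complexity, 0.9).round(1)}"
puts "neutral      (exponent 1.0): #{project_cost(complexity, 1.0).round(1)}"
puts "bad process  (exponent 1.2): #{project_cost(complexity, 1.2).round(1)}"
```

Because process sits in the exponent, the same complexity score produces wildly different costs — which is the talk's point about process either dampening or amplifying everything else.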
How do we stop building these giant sandpiles that wreak havoc, cause instability, and cause frustration? How do we move toward a more balanced, distributed set of smaller sandpiles? That brings us to part two, which gets into practical application — how we can actually do this, I hope, for everyone. The other two times I've given this talk it was longer and I didn't include this part, so I'm going out on a limb here; I just added these slides this week.

So: optimized decision-making pathways. Our brains have these. Here's one we should all try to avoid. Our goal is to solve a problem, so we're given the problem and a set of knowledge to work with, and this is a really bad optimized decision-making pathway: if a quick fix is possible, that's what we do. Otherwise, we work the problem a little more and learn a little more about it — but then we hit the same branch. If we can do a quick fix at that point, if we can do the minimum amount of learning to get the quick fix in, then we do that. And a lot of the time I don't think we realize we've fallen into the trap of making this our decision-making pathway.

Another way to look at it: if we have no understanding of the problem, one possible pathway is to just do random things and see what works, see what shakes out. Otherwise, maybe we make the intentional decision to learn a little more about the problem. And at that point, again, we might do a quick fix, because we understand it a little but not a lot — so we can monkey-patch something in. Otherwise, if we understand a good amount about the problem, then I think we can make intentionally better decisions about how we're going to solve it in code.
Whereas if we're an expert and we really know what we're solving, we know what to do, and we're going to nail it. This presupposes not just problem understanding, though, but also solution-space understanding — that you're technically competent in what you're trying to solve and in the tools you're using.

And if you do [insert whatever] today, then you'll likely do it tomorrow; the inverse is that if you don't do it today, you probably won't tomorrow. If you're a big refactoring fan, you're probably going to refactor on a regular basis; if you're not, and you don't apply it daily, then chances are you're not going to do it tomorrow either. Same thing with testing: if you're a big unit-testing fan, an automated-testing fan, it's probably going to be part of your process, something you habitually do. If you don't, then you won't.

Daniel Kahneman wrote the book Thinking, Fast and Slow, and he points out that in a number of situations our brains don't compute probabilities and choose the best possible outcome when they can avoid it. There's a slow part of our brain and a fast part. The fast part I'm going to call our intuition, and the slow part I'm going to call our critical thinking. When we're faced with anything in life, our brain, if it can, does something in our subconscious to suggest a possible answer to the problem. If that sounds good, we go with it; if it seems fishy, we say whoa, whoa, whoa, and expend the mental effort required to actually figure out an answer.

One example is a visual illusion. I'm going to go to this next slide, but I don't want you to think about it — I just want your initial gut reaction. Which line looks longest? All right, someone thought about it, because you spent over a second and I didn't hear anything.
So I'm going to ignore all of your answers, because you're all going to get it right. Which line is longest? None of them — they're all the same length, but my brain can't understand that on its own. When my brain looks at that, it sees the third line as the longest. I have to consciously override it; I have to tell myself, no, if I line these up, they're actually the same length.

I think this applies in code. So this code — what would you say? Any red flags? Do you feel pretty good about it, just looking at it? Yeah, nothing here is in harmony with anything else. It just looks bad, and it's easy to spot. So if these lines had wrapped at 70 characters, we'd be good in your book — is that right?

All right, the next one I think is a little harder, because a lot of people will see this and not spot any issues with it. I don't know if you can actually read that — what do you think? Does it look okay? Does it have some issues? A lot of issues? One issue? It's got a private... oh yeah, it's getting cut off, sorry. The syntactical typos, if they're in there — and they may be — those aren't the issues. But yes, according to the slide, that is a typo. So Steve's right, and I think Steve is probably seasoned at looking at code and quickly identifying problems; he's not giving in to the illusion. But a number of people look at this and their impression is that it actually seems okay. You've got Cheep, and it describes nicely what it's going to do inside: it's going to load cheeps, find the people to cheep to, and then send cheeps. (A cheep is like a tweet, but it's a cheep.) And then you go to the private methods and look at those.
None of them are overly complicated or super long, so nothing immediately pops out. For a lot of people this flies under the radar, and for someone just scanning the code, doing a quick review, a bad decision-making pathway gets accessed and this gets a thumbs-up.

So, drawing it out — and this isn't very scientific; there are probably dependencies I've missed — I like to look at the dependencies, and the degree of dependency, between one object and another. This, I think, is what Steve saw: our DailyCheeper actually does a lot, right? It talks to the file system — if you notice, there's a hard-coded path in there, and we're reading off the file system, and one thing that sucks about that is it makes testing suck. It also talks to the mobile push library, which I think is what actually sends the cheeps. We have to know about the cheeps data structure, since we're loading it from a YAML file — however that's defined, we've got to know about it. We're talking directly to Person, and also to the class interface on Person for finding people and their subscriptions. And it also knows how Person and Subscription are related, along with some technical details about how subscriptions work.
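The slide code itself isn't in this transcript, so here is a hypothetical reconstruction of the kind of class being described. Every name, path, and helper below is invented to illustrate the dependency fan-out; it is not the talk's actual code. The stub classes exist only so the sketch runs on its own.

```ruby
require "yaml"
require "tmpdir"

# Tiny stand-ins so this sketch runs on its own -- all invented:
Subscription = Struct.new(:active, :daily) do
  def active?; active; end
  def daily?;  daily;  end
end

Person = Struct.new(:device_token, :subscriptions) do
  def self.all
    @all ||= []
  end

  def self.register(person)
    all << person
  end
end

module MobilePush
  def self.deliveries
    @deliveries ||= []
  end

  def self.deliver(device_token, cheep)
    deliveries << [device_token, cheep]
  end
end

# Hypothetical reconstruction of the slide's class. No method is long or
# complicated, yet the class depends on the file system (hard-coded path),
# a YAML data format, a push library, and Person/Subscription internals.
class DailyCheeper
  CHEEPS_PATH = File.join(Dir.tmpdir, "daily_cheeps.yml") # hard-coded path

  def run
    send_cheeps(load_cheeps)
  end

  private

  def load_cheeps
    YAML.load_file(CHEEPS_PATH) # knows where cheeps live and how they're stored
  end

  def send_cheeps(cheeps)
    people_to_cheep.each do |person|
      cheeps.each { |cheep| MobilePush.deliver(person.device_token, cheep) }
    end
  end

  def people_to_cheep
    # knows how Person and Subscription relate, and Subscription's internals
    Person.all.select { |p| p.subscriptions.any? { |s| s.active? && s.daily? } }
  end
end

File.write(DailyCheeper::CHEEPS_PATH, ["good morning!", "cheep cheep"].to_yaml)
Person.register(Person.new("token-1", [Subscription.new(true, true)]))
Person.register(Person.new("token-2", [Subscription.new(true, false)]))
DailyCheeper.new.run
puts "deliveries: #{MobilePush.deliveries.size}" # only token-1's daily sub qualifies
```

Counting the arrows out of this one small class — file system, data format, push library, Person, Subscription — is exactly the dependency drawing the talk describes, even though no single method looks bad on its own.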