Today, I'm going to start by telling you a story, in three parts.

Part one. When I started my first job out of college, with a physics degree and some really terrible Fortran as my only experience, my new boss handed me two books. One was Kernighan and Ritchie's The C Programming Language, because I was going to be writing C, and the other was Steve McConnell's Code Complete 2. Both great books, but for today we're going to focus on Code Complete 2. Specifically, we're going to focus on one of the many good ideas McConnell had there: when you have a function which is going to do something, say compute a total from a maximum and a minimum by summing over the range between them, then instead of declaring all your variables at the top of that function, from the total to the min and the max to even the loop index, declare them where they'll be used. Max and min get declared and initialized right before each computation, the total right before the loop, and, if your language supports it, the loop index inside the loop itself.

McConnell notes that this has a lot of really lovely knock-on effects for how you structure and think about your program. It forces you to think about the scope each variable should have. And that not only makes it easier to understand your code when you read it, it also makes it easier to change your code. For example, if we want to refactor this by extracting the total into a standalone function that does the loop itself, we can, and it's basically trivial, because we've already narrowed each variable's scope. If we had left things as they were at the beginning, it would have been much more difficult to see how to make that refactor. And this is a trivial function; it's way worse if your function is more complicated. Shrinking variable scope helps us understand and make changes to our code.

For the second part of my story: in 2015, I met my favorite programming language, Rust.
I'd spent most of my career up to that point writing a mix of C and Fortran and C++, so Rust's performance guarantees (we're going to be just as fast as those languages) were very attractive. So were some of its type system niceties, which I had recently come to appreciate from learning languages like Haskell and Elm; we'll come back to that in a minute. But my favorite part, then and now, is Rust's ownership system.

Ownership in Rust is built on its type system, using an advanced type theory idea called affine types. With just a couple of fairly simple concepts, we get a high-level language that mostly feels like writing C# or JavaScript while delivering C or C++ level performance. Again, two simple rules. One: every piece of data in the system has exactly one owner. Two: there's no shared mutable state in the system. We can have sharing or we can have mutability, but not both. We can share the data around as long as it's only readable, and the data's owner is in charge of this; or we can write to it as long as there's only one thing which has access to it. From those two rules (which, yes, you sometimes have to work a bit to internalize; fighting with the borrow checker is a thing) you get that nice blend of high-level ergonomics and low-level performance. There's a reason it remains my favorite programming language: we've isolated mutability, and now we have control over mutability in the system. We've shrunk the places where data can change from anywhere we have access to the data to only the places the owner says it can be changed.

Part three of my story. As I mentioned a minute ago, when I started learning Rust I had also recently been learning Haskell and Elm, which are purely functional programming languages. Purely functional programming is when we build our whole system out of pure functions. A pure function is a function which has two properties.
First, it only has access to its arguments. It doesn't have access to any global state, and it doesn't have access to any global functions. Its arguments might themselves be functions, and indeed its return values might be functions, but none of them access anything outside its arguments.

Second, we embrace immutability. There is no mutation in the system. So those arguments that we hand in, the only things we have access to, we can't change. To produce a new result, we may need to make a copy and then hand back that copy with the transformation applied. And of course we don't have access to global state, so we can't mutate that; but beyond that, we can't mutate anything at all.

This gives us purity in the sense of a chemical solution: we want to control the outcome, so we carefully control the ingredients that go in. We want to pour in exactly and only the things we want, and not mix in anything else out there in the world. That's the same kind of benefit we get from pure functional programming. If we have the same inputs, we get the same outputs, every single time, and we can be assured of that because there aren't any other, invisible inputs to a function: just its arguments. We also eliminate the kinds of bugs, or even just confusion, that come from mutability. If I hand this function an argument, is it going to change that argument out from under me? No, because it can't. There's no mutability; it can just hand me back a copy with the transformation applied.

Finally, the combination of those things gives us a lovely property: referential transparency. That's a slightly opaque name for the idea that we can substitute the result of evaluating a function for the function call itself. You can think of this like math. If I have an equation that includes the term 2 plus 5, it doesn't matter whether I write 2 plus 5 or 7 there; they're the same thing.
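To make that concrete, here's a small sketch in TypeScript. The functions and names here are my own illustration, not from any particular library:

```typescript
// Illustrative sketch of purity; both functions are hypothetical.

let taxRate = 0.2; // global, mutable state

// Impure: reads hidden state (taxRate) and mutates its argument in place.
// Its result depends on things outside its argument list, so a call to it
// cannot safely be replaced by a value.
function addTaxImpure(prices: number[]): number[] {
  for (let i = 0; i < prices.length; i++) {
    prices[i] = prices[i] * (1 + taxRate);
  }
  return prices;
}

// Pure: everything it needs arrives as an argument, nothing is mutated, and
// a fresh array is handed back. Given the same inputs it always produces the
// same outputs, so a call like addTax([10, 20], 0.5) can be replaced by its
// value anywhere it appears. That's referential transparency.
function addTax(prices: readonly number[], rate: number): number[] {
  return prices.map((price) => price * (1 + rate));
}
```

And that substitution works at any scale: any call to the pure version with known inputs can be swapped for its result.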
And in fact, if I have an equation which contains a sub-equation like that, I can still just substitute it; it's the same value. Pure functions, because they only have access to their arguments and their arguments are immutable, have referential transparency. The net of this is that a pure function only needs itself to get its job done, and I only need to understand its arguments and its outputs to figure out whether it does the right thing, to figure out how to improve it, and so on. In a language like Haskell or Idris or Elm, where every function in your program has those properties, every function gives you the ability to look at this function without having to worry about that function over there in order to understand it. Now, there are other complications that come along with this, and these languages have solved them: for example, how do I do a console log if I don't have access to a global console.log function? But for today I want to stay focused on our ability to think about this function right here without reference to that function over there, and vice versa.

Pure functional programming enthusiasts will often describe this as giving you the ability to reason about your code better. But what does reasoning about your code mean? The term gets thrown around a lot, so let's try to dig into it a little. I think that reasoning about your code is the ability to understand your code: what will it do, and how will it do it? And there are a lot of things we want to understand about our code. Some of them are classical computer science kinds of reasoning. For example, what's the algorithmic complexity of this implementation? How will it scale, speed-wise, as we have to traverse more data? And the same thing for space usage: how much memory will this data structure use as we put more and more data into it?
We also want to be able to reason about changing our code. What code do I have to change, and where in my program, to fix a bug? Or to improve the performance of my program, whether in classical computer science terms or in terms of things like cache locality? Where do I have to go if I want to add a new feature to my program, or remove one? What do I have to change? And then there's the last one I'll call out, though I think there are more. This one's the big one, right? Does my code work? Does it do what it should? Does it solve the business problem I set out to solve with it in the first place? Sad to say, I have read a lot of code, and even sadder to say, I have written a fair bit of code, where it's actually really hard to answer that question.

Now, because reasoning about these things, understanding our code, is so important, we've built up a lot of tools and techniques over the years to try to get a better handle on them. But, and this is my thesis for today, I think the key to all of those in many ways, or at least a key ingredient of all of them, is the ability to reason locally: to understand this piece of code without having to go understand other pieces of code in the system. Or, as my wife put it when I introduced this idea to her: we want to shrink the radius of what we have to think about to understand our system.

So, going back to our first three examples. Code Complete 2's variable scoping guidelines from Steve McConnell helped us reason locally about variables and variable scope, and that improved our ability to work with that code over time, because we could now understand it better and make changes to it, like extracting that loop into its own function.
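That before-and-after can be sketched like so. This is my own TypeScript rendering of the idea, not McConnell's code (his examples aren't in TypeScript, and the names are illustrative):

```typescript
// Before: every variable declared at the top, scoped to the whole function.
// The loop is entangled with declarations far away from it.
function totalOverRangeBefore(values: number[]): number {
  let total: number;
  let min: number;
  let max: number;
  let i: number;

  min = Math.min(...values);
  max = Math.max(...values);
  total = 0;
  for (i = min; i <= max; i++) {
    total += i;
  }
  return total;
}

// After: each variable declared right where it's used, and the loop index
// lives inside the loop itself...
function summarize(values: number[]): number {
  const min = Math.min(...values);
  const max = Math.max(...values);
  return totalOverRange(min, max);
}

// ...so the summing loop shares nothing with its surroundings, and
// extracting it into its own standalone function is trivial.
function totalOverRange(min: number, max: number): number {
  let total = 0;
  for (let i = min; i <= max; i++) {
    total += i;
  }
  return total;
}
```

The tight scoping is what makes the extraction a mechanical, safe change.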
For Rust, control over mutability again gave us the ability to understand our system better, because we know where data change can and can't happen, and the ability to refactor, because the compiler has our backs when we go to make a change. If we say "I want to write to this now" and it's not writable, it's in read-only mode, the compiler will tell us. And by dint of that same compiler telling us, we get rid of a whole bunch of bugs: we get memory safety and a high-level programming language together, by decreasing the scope, shrinking the radius, of what we have to think about in terms of data mutation. And pure functional programming similarly combined purity with immutability to give us referential transparency, which lets us understand this function without reference to that function, which allows us to refactor that function and know it won't break this function, and which also gets rid of tons and tons of bugs along the way.

All of these improve our ability to reason about our code, to understand it, and therefore to work with it, by improving our ability to reason locally about it. They let us shrink the radius of what we have to think about. Now, at this point you might be thinking: is that really so general a principle? You could have just cherry-picked these examples because they fit well with your thesis. I did not. In fact, I'm going to work through a series of case studies here, and I will show you that this is the holy grail. It is a significant piece, at least, of the innovations we have been building and the ideas we have been chasing for the last half century.

Let's start by digging into Edsger Dijkstra's famous 1968 paper, 53 years ago, "Go To Statement Considered Harmful." Dijkstra's thesis: our intellectual powers are rather geared to master static relations, and our powers to visualize processes evolving in time are relatively poorly developed.
For that reason, we should do, as wise programmers aware of our limitations, our utmost to shorten the conceptual gap between the static program and the dynamic process, to make the correspondence between the program spread out in text space and the process spread out in time as trivial as possible.

This is about local reasoning. Dijkstra wants us to shorten the conceptual gap between the static program and the dynamic process, between the program in text space and the process in time. Over the rest of the paper, Dijkstra teases this out in terms of coordinate systems. Those can be textual, like line numbers, or they can be things like the index of a while loop. If you have goto in your system, Dijkstra tells us, it doesn't matter what else you've done to make your system comprehensible. Control flow constructs? Ha! Functions? Ha ha! Variable names? None of these will help you. That while loop index: what does i mean here? Well, there's a goto in your program, so who knows, because that goto could drop you right in the middle of the while loop, and then the index is a function of how the loop happened to end last time, and maybe other things too. By contrast, if you get rid of the goto statements in your program, that while loop is comprehensible to you.

And the reason is that goto requires global reasoning. You have to get the whole world in your head, because you have to read the whole program to know what might land you in the middle of that while loop. By contrast, when you embrace structured programming, you get the ability to reason about just that while loop, to reason locally, because if there are no gotos in your program, then you can't accidentally end up in the middle of that while loop somehow. It's improved our ability to reason locally about control flow.
Now, I wish I could say this was a purely hypothetical problem, but remember how I said I spent the first chunk of my career working with Fortran and C and C++? Those programs were riddled with gotos. It was a bad, bad time. I literally had to understand the whole flow of the program any time I wanted to make any changes. And so the only way to actually make progress was to work to eliminate every single goto statement in the program: to turn it into a function, to turn it into a structured while loop, whatever the case might be. And that did pay off. In the end, I could reason about the code, because I had gained actually meaningful control flow. But it was painful. Goto murders our ability to reason locally; structured programming gives us that ability back.

And the same goes for another piece of structured programming advice we've probably all internalized: avoid global mutable state. In the same way that a goto forces us to reason globally about control flow, global mutable state, some object out there that any function can touch and change, forces us to reason globally about data change, because any piece of my program can change it arbitrarily at any time. Whereas, while it's a little more work to thread that state through to just the functions that actually need to read it and actually need to change it, at least that way I can see exactly which places in my program can actually change that data. Those same goto-riddled programs had a lot of global mutable state. I spent a lot of time threading it through, but when I was done, I could actually understand where the changes happened, and therefore where any bugs were, or where performance improvements could be made.

Structured programming gave us a big leg up, but we kept looking. Object-oriented programming gave us a bigger leg up here, and encapsulation specifically keeps us on the theme of reasoning about data change.
When we had that shared mutable data in our system, even once it's no longer global, when we pass it around through the system we have to reason about every function we pass it into, because any of them could make arbitrary changes to our data. If we encapsulate our data, wrap it up inside an object which has a few public methods on it, then we've gained the ability to reason not about every function which touches the object, but just about this class's methods. No matter what function we pass this object into, the only ways its private data can change are through those methods. So we can test it, we can refactor it, we can make changes to it, and those other functions we pass it into won't get broken along the way.

This actually goes for the SOLID principles too, though in this case we're now talking about interfaces, the contracts between objects. We can start with the single responsibility principle. This one's pretty straightforward: if this object has a single responsibility and that object has a single responsibility, then when I'm thinking about this object, I don't have to think about the responsibilities of that object. If we design our system that way, we've shrunk the radius of what we have to think about when thinking about any given object. The same goes for the open-closed principle: we should design our objects so that they're open for extension but closed for modification. What that means is that when I'm working on the internals here, if I've designed my object this way, who cares how it's been extended? None of those extensions can muck with my internals. And vice versa. I've shrunk the radius of thought to either the extensions or the internals, but not both at the same time. The Liskov substitution principle says you should be able to use a subtype anywhere you use the supertype. So if you have an animal, you should be able to use a cat or a dog or a goose.
This requires us to think only about animals, and therefore it lets us think only about animals, not about the details of cats or dogs or geese. The interface segregation principle says to have lots of small interfaces instead of one big one, because that way whatever is consuming an interface, whatever is a client of it, can think about just its own responsibilities instead of having to think about all the responsibilities in the system. This is powerful for shrinking the radius of thought, because it means this client only has to know what this client has to know; it doesn't need to know anything about the other kinds of entities in the system. Finally, the dependency inversion principle says to depend on abstract things instead of concrete things, and what we really mean there is an interface instead of a specific class. Whether that's a formal interface, as in C# or TypeScript, or a duck-typed interface, as in JavaScript or Ruby, the big idea is that if we depend on an interface, then we can swap out the implementation; we can change the details on the other side of the interface. We use this all the time to make testing possible, for example. All of these let us shrink the radius of what we have to think about in terms of interfaces.

And this generalizes beyond structured or object-oriented programming, too. The actor model, best known in the context of Erlang, which runs a great deal of the world's telephone infrastructure, helps us reason locally about resilience, about fault tolerance. In a traditional monolithic system, we have system-wide failure and recovery: if an exception gets thrown and isn't handled somewhere in your system, the whole thing goes down. In an actor model, instead, we have a bunch of small pieces of our system which can talk to each other by sending messages, but which can fail without other pieces of the system failing. So you can say: I know this is a fatal condition for this piece of my program, and I can just let it die.
And the supervisor can say: hey, come back up in a healthy state, please. And the rest of the system is stable against that. We have independent failure and recovery. This improves our ability to reason locally about fault tolerance and resilience.

It goes for types, too, and how we use them in our programs. This is closely related to some of those OO principles, but it's just as applicable in functional programming. Let's imagine we have a User class. It has a name, an age, an email address, and a state of residence; we'll say this is a United States user. And we want to describe that user. We can do that with a function which accepts a User and returns a string pulling the user's name and age together: "Chris is 33 years old." There's a problem here, though: we're actually coupled to all the details of the User. When I call describe, I don't have any way to know whether describe is in fact using the email address and the state. It's not, but I don't know that, so I can't make changes to those without checking the implementation of describe. And even inside describe: this is a simple function, but if it were a little more complicated, I could accidentally end up coupled to the email address when I never originally meant to be. All of this makes data coupling hard to reason about. By contrast, if I say, hey, this describe function takes any object which has a name and an age, which are a string and a number respectively, now I've eliminated that data coupling. As a caller, I can say "here's a User," because I know describe doesn't depend on the email or the state of residence or anything else. And internally, I'm now protected against accidentally depending on details I didn't mean to. In sum, we've moved from reasoning about a whole class to reasoning about just the structure of the data: from the User to just those particular fields we care about. We've improved our ability to reason locally about data coupling.

Last but not least, let's talk about autotracking.
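Before we do, here's that describe example rendered in TypeScript. This is my reconstruction of the example just described, with illustrative field names and values:

```typescript
// My reconstruction of the example above, not original slide code.
class User {
  name: string;
  age: number;
  email: string;
  stateOfResidence: string;

  constructor(name: string, age: number, email: string, stateOfResidence: string) {
    this.name = name;
    this.age = age;
    this.email = email;
    this.stateOfResidence = stateOfResidence;
  }
}

// Coupled version: the signature names the whole class, so callers can't
// tell that email and stateOfResidence are unused without reading the body.
function describeUser(user: User): string {
  return `${user.name} is ${user.age} years old`;
}

// Decoupled version: the signature says exactly which data is used. Callers
// can reason locally, and the body can't silently grow a dependency on the
// email or the state of residence.
function describe(entity: { name: string; age: number }): string {
  return `${entity.name} is ${entity.age} years old`;
}

// Because TypeScript's typing is structural, a full User still works:
const chris = new User("Chris", 33, "chris@example.com", "CO");
describe(chris); // "Chris is 33 years old"
```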
Seems like a good way to wrap things up at EmberConf. In Ember Octane, with autotracking, there's only one way to have reactivity in our system: @tracked, and the primitives it's built on. That's huge, because in Ember Classic, and in fact in any observer-based system, reactivity is actually a function of the consumer of our data. In Ember Classic, for example, classic computed properties would use the computed decorator to declare which things we wanted to listen for changes on, and then, as long as we used this.set and this.get to make and read changes, that property was reactive. Anyone, anywhere in the system, could do that if they had access to the data; and because we had two-way binding rather than one-way binding, when we passed data through our system, all of it was available for this kind of access. This meant that if you wanted to understand the reactivity of any given piece of data in your system, you had to read every place that used it, or anything derived from it, because the caller, the consumer, is in control of the data. They get to say: ha ha, this is reactive now. And I would say this is about the worst thing I can imagine in terms of defeating our ability to reason locally about reactivity, except that the second part is even worse.

The second part is that we also had observers and observer-like lifecycle methods. These chain on top of that original problem and allow us to create arbitrary further pushes of reactivity into the system. So: I watched that property, I've made it reactive by marking it as a dependent key and using this.set to push into it, and now, whenever it changes, I can trigger further this.sets. If you want to understand the full reactive flow of data through an Ember Classic system, or anything that looks like it, you have to track all of the data involved all the way through the system.
No matter how implicit a transition is, even if it goes through a didReceiveAttrs hook, so that you don't even have a computed property key to check, you have to follow it. This infamously could get us into infinite loops, because you could set a property over here, which calls didReceiveAttrs there, which sets a property there, which triggers an observer there, which sets the original property again. Oh no. This was terrible, to be perfectly honest. I liked a lot of things about Ember Classic, but this was terrible, and that's why we've moved to autotracking: it puts the owner of the data in control of reactivity. And this is huge. When we mark a piece of root state as reactive with @tracked, that is what makes it reactive. Nothing else in the system can make it reactive. We no longer have this kind of arbitrary reactivity where consumers are in control; the owner of the data is in control of the data. Second, because we've got real one-way data flow, because Glimmer components don't have two-way binding of their arguments, and because we've gotten rid of observers and observer-style lifecycle hooks, we can now be confident that there are also no arbitrary pushes of reactivity further into the system. We've improved our ability to reason locally about reactivity. Reactivity happens where things are tracked, and that's it.

Thinking back to Rust, this should sound familiar. Rust gave us control over mutability, and that let us reason about the rest of our system because we knew where data could change. That's true here of reactivity: now we know where data can change in reactive ways that matter to the rest of our system. Or pure functional programming: we embraced purity and immutability because they gave us referential transparency everywhere else, so we could reason about this function without having to think about that one. Control over reactivity does something very similar.
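One way to see what this buys us is a toy model. To be clear, this is not Ember's actual implementation (real autotracking uses per-property tags and much finer-grained invalidation); it's just a TypeScript sketch of the core idea, that tracked root state is the only source of reactivity:

```typescript
// Toy model of autotracking. NOT Ember's real implementation: Glimmer
// tracks per-property tags, while this sketch invalidates on any tracked
// write. The point is the shape: only tracked root state creates change.

let globalRevision = 0; // bumped only by writes to tracked state

class Tracked<T> {
  #value: T;
  constructor(value: T) {
    this.#value = value;
  }
  get(): T {
    return this.#value;
  }
  set(next: T): void {
    this.#value = next;
    globalRevision += 1; // the ONLY place reactivity originates
  }
}

// Everything downstream is a memoized pure function of tracked state: it
// re-runs only when some tracked write has happened, never for any other
// reason. No consumer can make anything reactive on its own.
function memo<T>(compute: () => T): () => T {
  let cachedAt = -1;
  let cached: T | undefined;
  return () => {
    if (cachedAt !== globalRevision) {
      cached = compute();
      cachedAt = globalRevision;
    }
    return cached as T;
  };
}

const name = new Tracked("Chris");
const age = new Tracked(33);
const description = memo(() => `${name.get()} is ${age.get()} years old`);

description(); // computed: "Chris is 33 years old"
age.set(34);   // a tracked write: the only way to invalidate
description(); // recomputed: "Chris is 34 years old"
```

Because invalidation can only originate at a `set` on tracked state, reasoning about "when does this derived value change?" stays local to the tracked cells it reads.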
By isolating reactivity to @tracked properties, we've made it so that everything downstream of them is a pure function, is referentially transparent. And to connect with the parts of the world that aren't, we have constructs that explicitly bridge into going and doing something with the DOM, or handling events coming from the DOM. We've gotten basically referential transparency all the way throughout our system, because we've isolated reactivity. We've shrunk the radius of what we have to think about when it comes to reactive data.

So, to sum up: it's really important to be able to reason about our code. We need to be able to understand what our code does and how it does it. But to do that, we have to be able to reason locally. We have to shrink the radius of what we are trying to understand in order to comprehend it, and therefore to work with it, to make changes to it. From affine types to the actor model, from structured programming through object-oriented programming, all the way up through autotracking and the kind of pure functional programming model it lets us embrace in an Ember Octane app: local reasoning is key. Our big takeaway should be that we need to keep leaning into local reasoning. We need to shrink the radius of thought. We need to keep it local. Thank you.