I'm really happy. I've been trying to see a talk by Glenn Vanderburg for years, and I always miss him. So I figured the only way I would ever get to see him was to get him here at my own conference. I wrote him an email and invited him to come speak. He's like, I'm not speaking anywhere this year, I have too much stuff to do. But, he says, it's GoGaRuCo. How can I say no? So he came here because of all of you. Thank you very much for getting him out. Glenn has been doing Ruby since 2000. He's one of the old guard, and he's worked at InfoEther with Rich Kilmer and Chad Fowler and all those folks. I'm just so happy to have him here today to talk to you. He's from Dallas, Texas, so give him a San Francisco welcome.

And I currently work for LivingSocial. We're hiring, you may have noticed. So Josh, thanks for letting me do the antepenultimate talk of the conference.

I have to make a disclaimer right from the beginning: I'm here to talk to you about something I'm not an expert on. Like speakers sometimes do, I chose my topic and volunteered to do a talk on it because I wanted an excuse to study these things, learn about them, and get a little better. So I don't claim to have all the answers. This is me learning, in process. I hope you learn something from it. And I would actually really enjoy email and feedback, glenn.vanderburg@livingsocial.com, or @glv on Twitter, if you have any insights or want to build on this at all and can teach me something. I fully expect that'll be true.

So we programmers have some biases, some ways and habits of thinking, that are often very good and serve us very well. We love for things to be simple. We love simplicity. We look for code that's as simple as possible to do the task we need it to do. I've spent most of my career emphasizing to coworkers and to people I was teaching and mentoring the importance of keeping things simple if at all possible. Simpler code, simpler designs, simpler solutions are easier to read and understand. They're easier to debug and extend and change. It's usually harder to write simple code, harder to come up with simple designs, but it pays off in the end.

Because of the nature of what we do, we also tend to think in black and white. Our world is binary. The answers to questions are true or false. Solutions are right or wrong. Things either work or they're broken. Our work teaches us to think this way, because those are the concepts we work with in code, and that's a good thing. But it carries over into other parts of our thinking, where maybe things aren't so black and white. Now, to be fair, this bias is not unique to programmers. Everybody tries to oversimplify and think in black-and-white terms, and dealing with vague things is kind of unsettling.

Programmers like complete solutions. We want a solution that covers every edge case, handles every boundary condition, completely solves the problem. And the holy grail for us is finding the algorithm or design that is both simple and complete: it handles all those edge cases and boundary conditions and special cases without having special cases in the code.

Daniela told us a mathematician's joke yesterday, so I'm going to illustrate this with a programmer's joke that I heard a while back. A zoologist, a physicist, a mathematician, and a programmer are on a safari. They're standing in the back of this vehicle, and they see this herd of zebras flowing across the steppe.
They're standing there taking this in, and the zoologist all of a sudden goes, my god, that zebra is spotted, not striped! Nowhere in the history of my field has anybody ever recorded that spotted zebras exist. I'll be famous! And the physicist says, now, hold on there, guy. Don't get ahead of yourself. At this point, all we really know is that at least one zebra is spotted. And the mathematician shakes his head and goes, such sloppy thinkers. The only thing the evidence tells us at this point is that at least one side of one zebra is spotted. The programmer, of course, is standing in the front of the vehicle going, oh, a special case.

Now, when I first heard that joke, I thought it was clearly made up by somebody who's not a very good programmer, because a good programmer would have a different reaction. A good programmer would say, aha, spots are generalizations of stripes. And this turns out to be true. If you're writing shading algorithms, you can write one where you tweak the parameters a little bit and it might generate spots like a leopard, or more like a cheetah, or stripes like on a zebra or a tiger. And that's the holy grail, right? One algorithm that, if you change its inputs, handles all the rich special cases you see in the world.

We like simplicity. Sometimes we also like complexity, if it seems like there's a bound to it, if it seems like it's possible to get your mind wrapped around it. We like the challenge. This is why we have languages like Scala. It's also why we have obfuscated coding contests and bad coding contests and things like that. We had one of these inside LivingSocial a little while ago, and a number of people said, oh, this is great, I can write some complicated code, and took on that challenge. This is my solution. It's a YAML parser, a purely functional YAML parser controlled by regular expressions. It's disturbing how proud of this I am. And Tony was here a minute ago. Where are you? I'm still a little bitter that Tony beat me in the contest, Tony Arcieri.

But in any case, we like complexity sometimes, not all the time. There's a certain kind of complexity that doesn't seem to have a bound to it, that doesn't seem like we can ever really grasp it. There are problems that have no clean solution. Maybe sometimes they aren't even computable. Maybe they're context-dependent, but we have no hope of ever really knowing or pinning down all the little bits of context we need to solve the problem. Maybe, in some cases, even the best solutions we can think of have severe downsides, and we're paralyzed trying to choose between these flawed solutions. And we tend to recoil from these complex problems. If we aren't able to completely run away and just ignore them, often, unfortunately, we pretend. We oversimplify. We act as though the problem were simple. We argue with each other as if the problem were simple. And we decide as if the problem were simple.

And I've got to be honest about where the motivation for this talk comes from. It's an election year. Earlier this year, Josh had asked me to come speak, and I was thinking about what I would talk about. And I was really frustrated about how, in our sound-bite-driven, anger-driven political culture, we oversimplify every problem and don't seem capable, as a culture, of dealing with the full complexity of the difficult problems that face us.
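(Back to the spots and stripes for a second, because you can sketch that idea in a few lines. This is my own toy illustration, not any real shading algorithm: one formula whose parameters slide between zebra stripes and leopard spots.)

```ruby
# Hypothetical sketch: one parameterized formula, two patterns.
# With y_freq = 0 the formula collapses to plain stripes; raising
# y_freq modulates the stripes vertically until they break into spots.

def pattern(width: 60, height: 20, x_freq: 0.6, y_freq: 0.0)
  Array.new(height) do |y|
    Array.new(width) do |x|
      v = Math.cos(x * x_freq) * Math.cos(y * y_freq)
      v > 0.3 ? '#' : ' '
    end.join
  end
end

puts pattern(y_freq: 0.0)   # stripes, like a zebra
puts
puts pattern(y_freq: 0.9)   # spots, like a leopard
```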
And then I thought: that kind of oversimplification is not just a characteristic of political culture. Programmers do the same thing. So I want to talk about the ways we oversimplify, the ways we jump to decisions and make poor decisions because we don't face the full complexity of the problems ahead of us. And then I want to talk about some techniques, some habits of mind we can cultivate, to help us not recoil from complex problems but really think about them, have productive discussions about them, and reach some kind of saner solution or compromise.

One way we oversimplify is that we concentrate on primary effects and ignore second-order effects that happen indirectly because of the solution we put in place. As one example, a friend came to me a few years ago and wanted me to build a software system for him and his business partner. The business partner (actually not quite a business partner, but a friend of theirs) was the leading patent holder at Dell. This was when Dell mattered. Working with the lawyers at Dell, he had systematized a process for writing and submitting patent applications that were highly likely to go through the process and be approved, and they had succeeded in making that process streamlined and cheaper. And he thought, we could build a product out of this and help people submit successful patent applications. Now, my first objection to taking part in this project was that the world doesn't need more patents. But beyond that, I pushed back on them and said, this isn't going to work the way you think. This is not a mechanical process you're trying to game; it's a human process. The patent office is made up of human patent examiners and managers, and they are already overworked and underpaid. If a product comes along that streamlines and accelerates and funnels more patent applications into that process, you know what's going to happen? They're going to change the process. It's a great example of ignoring second-order effects, of assuming that the system you're trying to change won't push back on you.

Another, totally different example: I'm old enough to remember when Java started. There were then, and still are, quite a few people who just really hate the idea of garbage collection because it's slow, right? They perceive the slowness because they see garbage collector pauses and things like that, and they think about what has to happen for it all to take place, going and sweeping through all the garbage in the system. But what they ignore is that all that same kind of bookkeeping has to happen in manual memory management as well; it's just that the cost is spread out a little differently. Certainly the performance profile of garbage-collected systems is different, and may not be appropriate for some types of systems. But it's not slower, because the same kind of bookkeeping has to happen either way. And there are all kinds of second-order effects. If the garbage collector controls all of this, it can make interesting simplifying assumptions and change the way allocation happens, so that even though collection is not significantly faster or slower, allocation is actually much cheaper. You end up with a net win for the whole system. Sometimes these evaluations are really complicated, and there are second-order effects you need to factor into your equations.
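To make that allocation point concrete, here's a deliberately simplified sketch. This is my own toy model, not how MRI, the JVM, or malloc actually works: in a compacting, garbage-collected heap, allocation can be a single pointer bump, while a manual allocator typically searches a free list that fragments over time.

```ruby
# Toy model only. A compacting GC'd heap can allocate by bumping a
# pointer; a manual allocator searches an ever-fragmenting free list.

class BumpAllocator
  def initialize(size)
    @top   = 0
    @limit = size
  end

  def alloc(n)
    raise "heap full (a real GC would collect and compact here)" if @top + n > @limit
    addr = @top
    @top += n                      # allocation is just one addition
    addr
  end
end

class FreeListAllocator
  def initialize(size)
    @free = [[0, size]]            # [offset, length] holes
  end

  def alloc(n)
    i = @free.index { |_, len| len >= n }   # first-fit search, O(holes)
    raise "no fit" unless i
    off, len = @free[i]
    if len == n
      @free.delete_at(i)
    else
      @free[i] = [off + n, len - n]
    end
    off
  end

  def free(off, n)
    @free << [off, n]              # fragmentation accumulates over time
  end
end
```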
In addition to ignoring second-order effects, we sometimes forget secondary benefits. My favorite example of this is test-driven development, or unit testing in general. I've done a lot of teaching of that skill, and I've noticed a pattern: TDD has several different benefits that you get out of it, but people aren't good at dealing with equations that have that many outputs, right? When you teach TDD, people tend to zero in on the one benefit that makes the most sense to them, decide that's what it's about, and ignore the rest. And the interesting thing is, if you just choose one benefit of TDD, it's not necessarily worth it. The case for the cost-effectiveness of TDD, based on any one of those benefits alone, is not a slam dunk. But if you remember all of them put together, it is a slam dunk. I kept seeing this over and over again: well, you know, there are cheaper ways to catch errors. Sure, but what about the other benefits of TDD?

Often we're asking the wrong question. Sandi, in her talk yesterday, had a great example of this. She said, people ask, what object should know this? What object should have this bit of knowledge? And that's a perilous question, because the form of the question assumes that one of the existing objects should, right? The answer "one that hasn't been written yet" is not likely to occur to us. But, she said, let's recast the question as: what message should I send? You write the code that way, and then it might become clear that you need to write some new class to receive that message.

A few years ago, I was working for a large, stodgy organization. I was in there helping them sort of hoist themselves into last decade. Charlie made a joke about AS/400s yesterday; yeah, I know what those are. In this big development group, a lot of the things we take for granted, source control, testing, automation, repeatable processes, things like that, were new to them. And along the way, I found out that the policy the ops group had put in place for us deploying a new version of one of our systems was that we had to provide them (this is so crazy) a tar file of source code and a script for how to turn that source code into the runnable system, how to build it. And when I say script, I know what you all are thinking, but they wanted a document: here are the commands you run to do this. I just boggled at this, and I got mad at the ops group for being so crazy. I said, do you realize that this process guarantees we can never deploy what we've actually tested? And I started going on a crusade about it. The ops group was curiously resistant to changing it. And the question I was asking was, basically, why are you so stupid? All of a sudden, I realized I was asking the wrong question. I was assuming they were stupid, but it's better to start by assuming they're smart and have some reason for this, and to ask: what could it be? Once I started asking the right questions, productive questions, it turned out that the ops group had been told, by the disaster recovery initiative, and also for legal reasons, that they were on the hook to be able to provide the source code for whatever was running in production at any given time.
Now, if your development team knows how to use source control well, knows how to tag and package releases, and has that all automated so it's consistent and it works, then this is great: the ops team can delegate that responsibility to the development team, and you're in good shape. But they knew they couldn't trust the development team to do anything. So the only way they could fulfill their responsibility, to be accountable for what source code was running in production at any given time, was to make sure they got the source code and built the system themselves, so they could save it and know what was in production at any time.

My favorite example of asking the wrong question is one I've been spending a lot of time thinking about for the past five years or so. Ever since early in my time as a programmer, I've heard people saying, why can't programming be more like engineering? Why can't we grow up and learn to use formal methods and math and discipline and be more like an engineering discipline? For a long time I was kind of like, yeah, that makes sense. And then I noticed that it didn't match up well with what I was learning about programming and how it worked in the real world; it just seemed impractical. So for a long time I was part of the crowd saying, no, no, no, we shouldn't be more like engineering. Engineering is not a good metaphor for programming. It doesn't work that way. But a couple of years ago I realized that "why can't we be more like engineering?" was the wrong question to be asking. The right question is: which kind of engineering are we like? It turns out that most of the people asking that question all those years had in mind a caricature of engineering, built almost entirely around civil engineering and large-scale structural engineering, big, materials-heavy disciplines where experimentation and iteration toward a design are prohibitively expensive. But there are other branches of engineering, like electrical engineering and chemical engineering and industrial engineering, that are much more like software. Their processes are more empirical, experimental, and iterative. And we were blocked for a long time from really understanding that, because we were asking the wrong question.

We have a habit of oversimplifying by thinking in black and white. Or, as my friend Neal Ford calls it, we fall prey to the "sucks/rocks" dichotomy. Everything's got to be one or the other: this is the greatest thing ever, it totally rocks; or no, that sucks, it's worthless. You see this kind of argument going on between static and dynamic typing people, and between OO and functional programming people. Yehuda has given me permission to use him as an illustration of this. A couple of years ago, three years ago, something like that, at Lone Star Ruby Conf, I was sitting at a table with Evan Phoenix. Evan built Rubinius, so he's obviously a big VM geek, and I'm sort of a closet VM geek. That week, about three days prior, Google had released the V8 JavaScript VM, so Evan and I had both promptly downloaded the source code, and we were there looking at it and talking about it. And Yehuda walks into the conference, just arrived, sits down at the table with us, and starts unpacking. He kind of overheard our conversation and said, are you talking about V8? And we said, yeah. And he said, that has a bug. And that's the reaction we have so often, the tendency to dismiss a new idea. And you know, Yehuda's way smarter than that.
But we have a tendency to dismiss something, a new concept or a new idea or a new entrant into the contest, as soon as we find one flaw in it. It doesn't have to be perfect to be good and useful. Daniela kind of touched on this in her talk yesterday. People reject TDD because they have a mistaken view of what it is: they want it to be mathematically strong, and they say, TDD can't prove that a system is correct. All right, of course it can't. It's not a proof; that's not the point of TDD. It helps you establish and gain more confidence in the correctness of a system, but it's not a proof.

Sometimes when we're arguing, we take strong positions for rhetorical purposes, and then we forget and take our own rhetoric at face value. In the early days of extreme programming (which is kind of the granddaddy of agile processes, for those of you who don't know), XP consisted of 12 practices, and there was this fairly dogmatic core of supporters who were like, you have to do all 12 all the time, or you can't call it XP, and besides, it's probably crap and it won't work. For the most part, these people knew what they were doing. They didn't really believe that. But they faced a continuing pattern: they'd say, well, here's XP, you do this, this, and this, and somebody would say, well, we could do A, B, and C, but we can't possibly do D. That's ridiculous. In our organization, we just couldn't do that. So they started being dogmatic about it and saying, no, you have to. Even though you don't really have to, they'd say: you have to. And what they found is that a lot of those objectors who'd say, we couldn't possibly do D, would come back with, oh, well, maybe if we did this and talked to them about that, maybe we could do it, so we'll give it a try. By taking this artificially strong rhetorical position, they were able to break down some barriers and get people to try things they would initially have rejected just because they seemed too hard. But to some degree they, and to a greater degree a lot of the acolytes, took this rhetoric too seriously and started to really believe it. The first version of Kent Beck's XP book is very dogmatic and prescriptive, and it took a while to recover from that. Eventually they realized, okay, that was a bit silly, and the second edition of the book is much more flexible, framed as recommendations. They're not nearly as dogmatic as they were, but for a while they went too far in taking their own rhetoric seriously.

We tend to ignore context. Here's an example I was talking about the other night at the party with some of the Square guys. At LivingSocial, we deploy very, very frequently. And the guys at Square were like, yeah, we do that with some of our systems, but not the parts that are taking people's money. And that's the right answer, right? I'm not using them as an example of how not to do it; they're doing it exactly right. But a lot of the time, we tend to make snap judgments: they ought to be doing it like we're doing it. We ignore the context, the additional risk, the additional importance of what they're doing, that causes them to be a little more conservative about some of the ways they develop software.

And finally, sometimes we can't pretend the problem is simple, and we just throw up our hands and give up thinking about it seriously.
And we resort to magic. What do I mean by magic? Well, you know: some people are just good designers, there's no knowing why, we just have to find some of them. Or: he just doesn't get it. Whenever I hear somebody say, he just doesn't get it, what that tells me is, you haven't taken the time to understand his objections. Or: that just feels wrong. I don't know why we shouldn't do it that way, but it just feels wrong. Or: we should do that because it's so beautiful and elegant. What do those things mean? I'll come back to that in a little bit.

But I don't want to just paint a bad picture of how we're sloppy thinkers, how our brains like simplicity and we recoil from complexity. What I've been trying to learn through studying this is that there are useful techniques for dealing well with complexity, so that we can make progress and make smart decisions even when we can't do a full, thorough analysis of a situation, or when we have weak, low-quality evidence for our positions.

For example: we should learn to seek incomplete solutions. We have this bias for complete solutions, but we should learn to seek incomplete ones. I worked a while ago with a businessman who has made a career out of taking on problems where all of his programmers say, well, that's an unsolvable problem. And he says, well, I bet we can come up with something that will be an 80% solution for 20% of the effort. A good example of that: he likes to boil things down to three simple rules, and here are his three simple rules for weight loss. If you want to lose weight: never eat white foods; never eat in front of the TV; and always park in the most distant parking space from where you're trying to go. He said, these rules are not optimal. This might not be the fastest way to lose weight. Cutting out white foods means cutting out cauliflower, which is not bad for you at all. But keeping the rules simple means that people can remember them and obey them, and they're more likely to stick to them. It's an incomplete, non-optimal solution, but it works pretty well, because it works with the way people think.

You should seek heuristic or probabilistic solutions that might not give you great answers in every case, but help you quickly zero in on answers in a lot of cases. Daniela gave this quote from George Pólya yesterday: we need heuristic reasoning when we construct a strict proof as we need scaffolding when we erect a building. Heuristic reasoning is also useful when we can't, or really don't need to, go as far as a strict proof, but just need to get to an approximate answer.

Other aspects of incomplete solutions? I mean, have you ever thought about the fact that Git depends so heavily on the uniqueness of a SHA hash for a bit of text? We know there can be multiple different pieces of text that have the same SHA. It's just so mind-bogglingly unlikely that it's worth trusting that it won't happen. My friend Stuart Halloway has called GitHub a globally distributed attack on SHA-1. A precursor of that, the first system I encountered that relied on this probabilistic reasoning (yes, bad things could happen, but they're so unlikely we don't have to worry about them), was a network file server that was part of Plan 9. How many of you have ever heard of Plan 9? So there's a network file server in Plan 9 called Venti. It was a block-addressable network file server: a file was represented as a list of block addresses, and then you would go and get the blocks.
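Here's a minimal sketch of that kind of content-addressed store. It's my own toy version, not Venti's actual protocol, and it uses Ruby's Digest::SHA256 purely as a convenient stand-in for the SHA-1 that Venti used.

```ruby
require 'digest'

# Toy content-addressed block store: a block's address is the SHA of
# its content, and a "file" is just a list of block addresses.
class BlockStore
  def initialize
    @blocks = {}
  end

  def put(data)
    addr = Digest::SHA256.hexdigest(data)
    @blocks[addr] = data             # identical blocks are stored once
    addr
  end

  def get(addr)
    @blocks.fetch(addr)
  end

  def write_file(content, block_size: 8)
    content.scan(/.{1,#{block_size}}/m).map { |chunk| put(chunk) }
  end

  def read_file(addresses)
    addresses.map { |a| get(a) }.join
  end
end

store   = BlockStore.new
monday  = store.write_file("version one of a file that changes very little")
tuesday = store.write_file("version two of a file that changes very little")
shared  = monday & tuesday
p shared.size                        # most blocks are shared between days
```

And the reason it's safe to trust the hash: by the birthday bound, the chance of any collision among n distinct blocks under a 160-bit hash is roughly n²/2¹⁶¹, which stays astronomically small for any realistic n.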
The address of a block was the SHA of its content. Now, a lot of files change in small ways, especially down toward the end of the file, so tomorrow's version of a file might share all but one of its blocks with today's version. So they could have a backup system with a complete snapshot of the entire file system every day, for years and years and years, and it would only grow fairly slowly, because all those different versions of the files shared the same blocks. Obviously that can fail, but the probability of it failing is so low that it was a sensible thing to do. I read that paper, and that was my introduction to that kind of design. I called it the infinite improbability disk drive. So we should think about how much failure is acceptable. How much likelihood of failure can we accept?

We should exploit power-law distributions. I expect Charlie will perk up at this; a lot of what Charlie does making JRuby fast probably exploits power-law distributions in code. Anybody who's had a basic statistics class knows about normal distributions and bell curves, and we tend intuitively to think that probability distributions work that way. That doesn't often give us a lot to work with. But power-law distributions are everywhere: in our code, in our customer base and their preferences. This is also known as the Pareto principle or the 80-20 rule, although the 80-20 split just depends on where you draw the line. And if you can find out that part of your system obeys a power-law distribution, maybe you can find a really good solution for the vastly more common cases and live with very non-optimal solutions for the small number of less common cases.

If you want to deal well with complexity, train yourself to always think in terms of costs and benefits and risks, not in terms of right and wrong. Not long ago, in a conversation about technical debt, one individual said: so, look, come on, is the technical debt actually keeping you from doing stuff? That's the wrong question to ask. Of course it's not keeping you from doing stuff; you can still get stuff done. What it does is make those things much more costly, much more time-consuming, and much more risky than they would be if you didn't have all that complexity and technical debt. You should always be thinking and asking questions in terms of costs and benefits. Yesterday in Sandi's talk she said, what will it cost me later if I don't change this now? That's the right way to think about the problem. You need to incorporate the idea of future costs and future risks based on today's decisions. And this even applies to things like preventing terrorist attacks, right? We naturally want to think in terms of right and wrong, good solutions and bad solutions, and we especially recoil from cost-benefit thinking when it sounds like we're putting a monetary value on human life. But if you take the argument all the way to the other end, obviously there's a limit to how much we want to spend to prevent those kinds of things. We can argue about what the limit is, or how high it is, or what secondary bad effects we're willing to put up with. Let's have that conversation. Let's not just say "whatever it takes," right? Because we need to be thinking in terms of costs and benefits.
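Back to power laws for a moment: here's a quick, hypothetical simulation of exploiting one. With Zipf-like key popularity (my own toy assumption, key k requested with probability proportional to 1/k), a cache holding only the hottest 1% of keys serves around half of all requests, and tuning the exponent up makes the skew even more extreme.

```ruby
# Hypothetical simulation: Zipf-like access pattern over 10,000 keys.
N_KEYS = 10_000

weights    = (1..N_KEYS).map { |k| 1.0 / k }
total      = weights.sum
cumulative = []
weights.each { |w| cumulative << (cumulative.last || 0.0) + w / total }

# Draw a random key index according to the Zipf-ish distribution.
def sample(cumulative)
  r = rand
  cumulative.bsearch_index { |c| c >= r } || cumulative.size - 1
end

hot  = N_KEYS / 100                  # cache only the hottest 1% of keys
hits = 100_000.times.count { sample(cumulative) < hot }
puts "hottest 1% of keys served #{hits / 1000.0}% of requests"
# => typically a bit over 50% with this distribution
```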
Exploit emergent phenomena. When we bog down thinking about how complex things are, it's because we're trying to think of a way to control every aspect of the problem and make sure we get the solution we want. But we should learn about, and become comfortable with, the idea of emergent phenomena, where you have simple rules that reinforce each other, or simple flawed measures that back each other up, and they lead to an acceptable larger-scale solution. This is hard to do, it's not always easy, but there are some tips.

You can look for small-scale rules that seem to exert influence at larger scales. Sandi's talk yesterday was a big example of this. People talking about good OO design have been talking for a long time about big, high-level ideas like the dependency inversion principle. Sandi presented a very small set of localized, small-scale decisions you can make, and if you follow those at the level of methods and lines of code, you can often end up generating a much better design than you would otherwise.

Instead of trying to control everything that could go wrong, optimize in favor of facilitating things getting done, and have good feedback loops so you catch problems quickly and can deal with them. Often a defense-in-depth strategy can work, where you have one flawed practice that catches some things, and another thing backing it up that catches some of what makes it past the first one, sort of a sieve-style operation. If each of three imperfect practices catches, say, 70% of mistakes, only about 0.3 cubed, under 3%, slip past all three. In my research into software engineering, I learned that that's the way extreme programming really works. You have a bunch of different practices that gather feedback and make decisions at smaller and larger scales of your system, and mistakes that make it through at the small scale have a chance to get picked up at any of the other scales along the way.

You might also try just taking the time to do a deeper analysis. Sometimes, in things that seem too complex to be reasoned about, you actually can find some deeper patterns and understanding if you roll up your sleeves and dig in. Try to categorize entities, costs, risks, and relationships in the problem space you're looking at. Look for commonality or distinction. Look for simplicity hidden in the complexity. The extreme programming example I just mentioned all started when, at lunch one day, Dave Thomas made this comment: if you built a piece of software that was as tightly coupled as extreme programming, you'd be fired. He was talking about the stuff in the extreme programming book that said, yes, this practice is obviously flawed, but it doesn't matter, because it depends on this other practice that backs it up and makes up for its weaknesses. And if you draw a diagram of all those dependencies, it looks like this. So I could see what Dave was talking about. But at the same time, I was troubled, because I liked that structure to some degree. Something about the redundancy of it seemed like a good idea to me: rather than seeking perfect practices, use flawed ones that reinforce each other. So I started trying to find simplicity in there. I tried a number of different things, dragging it around in graphics editors, looking for patterns. I finally ended up laying things out in a circle and trying to move things with a lot of connections closer together. I ended up with this, where there were only a few connections that went very far across the circle. And that led me to start finding some practices that were alike and some that were different.
Teasing it out, I ended up with some practices that had a lot in common and others that were very different from those, and that let me make some structure out of it. So don't give up too easily.

Seek the roots of your intuition when you find yourself saying, I can't explain why I hate that, but it's ugly, it feels wrong. In Steven's talk yesterday, he talked about how experts have forgotten what they've learned. We work by intuition a lot of the time when we're experienced programmers, but that intuition is something that builds up in us from experience, and we have little rules of thumb that operate at a level we're not aware of. If you're not satisfied with the answer of "it's intuitively obvious" or "that's ugly" or "that's beautiful," if you force yourself to dig back in and try to understand where that intuition comes from, you can find out what's going on and explain it to people. That helps you teach, it helps you reach good decisions, and it helps you get over impasses when you're arguing and your intuitions are at odds. It can also help you discover that sometimes your intuition is leading you astray, because it's missing something in this context.

Finally, spend some time studying what are called wicked problems. Have you ever heard of wicked problems before, anyone? Oh, cool. Wicked problems are a formal idea, first described by Rittel and Webber in 1973. It comes out of civic planning theory. You can look it up on Wikipedia; there are a lot of resources. Where I was introduced to it was the weblog of Karl Schroeder, a science fiction writer from Canada, and I highly recommend reading not just his weblog but his books. He wrote a series of blog posts about wicked problems that we face today.

Wicked problems are characterized by a bunch of different things; here's a sampling. There's no definitive, simple formulation of a wicked problem. Wicked problems have no stopping rule: there's no point where you can say, okay, we know we're done now, we know we've solved it. Solutions to wicked problems aren't "this one's right, this one's wrong," or "this one failed, this one succeeded"; a solution makes the problem better or makes it worse. You can only solve them in degrees. There's no immediate or ultimate test of a solution; you might have to wait a while to see if you've made things better or worse. Solutions are one-shots: there's no way to experiment. You have to try it at full scale, and by the time you know whether it worked, everything's changed and you have to try something else. And almost all wicked problems can be described as a symptom of another wicked problem. These are the very complex things we face in economics and civic planning and politics and health management and epidemiology. There's a whole raft of them, and if you want to get good at dealing with the complexity in our world, which pales in comparison to those, spend some time studying the techniques and ways of thinking that people have come up with for tackling wicked problems.

So just to summarize: yes, seek simplicity. It's good. Don't overcomplicate simple things. But at the same time, when faced with real complexity, grasp it head on. Don't oversimplify, because that just makes things worse. And be prepared, by learning techniques and habits of mind, to make smart decisions in the face of complexity. Thanks very much.