Computers keep changing the world, but their power and safety are limited by their rigid design. The T2 Tile Project works for bigger and safer computing using living systems principles. Follow our progress here on T Tuesday Updates.

So, this twelfth episode of T Tuesday Updates is a special Christmas rant for 2018. This is a take on systems thinking aimed at people in and around computing, which these days kind of means everybody. I think some big problems in society are pretty directly linked to insufficient systems thinking in computing, including the planet-sized elephant in the room: our utter inability, in academia and industry both, to make computers secure in any practical way. I mean, is there anybody willing to step forward and say to me with a straight face that we are actually doing a good job with computer security? It's not for lack of trying, although that often makes it worse. It's because we've stuck ourselves with a computer architecture that makes security practically impossible. And people don't know this, or if they do they don't admit it, but at the root of it, it's computational thinking that produced the computer architecture that got us into this jam.

And computing's systems-thinking deficit is increasingly urgent, because computational thinking is gaining attention as something we need to be teaching to our kids, or to everybody, or something. The idea is that we should teach people how to think like traditional computers: how it's all about problem-solving, proceeding one step at a time, and so on. And I do get that. I mean, I've been computing since punch cards and Fortran in the 1970s. But by pushing computational thinking on its own, we are validating, encouraging, and spreading exactly the lack of systems thinking that, as I said, got us into this mess in the first place.

So I want to break that down now. This is mostly stuff I've been saying for years, but here at the end of 2018 I want to say it loud for the cheap seats in the back. So, here we go.

Computational thinking is about a line. Systems thinking is about a loop. Computational thinking is about an algorithm in an environment. It's about input to output, going from start to finish, from problem seen to problem solved. Systems thinking is about an agent interacting with other agents. It's about exchanges and relationships and endless talking. And it's about homeostasis: the system returning to normal. Computational thinking is about progress, change for the better, and making new things. Systems thinking is about stability, preservation of the good, and cleaning things up.

Now, you might be thinking this sounds like progressive versus conservative, liberal versus conservative. And to a degree, it is. But to the degree that it is, everybody is both progressive and conservative. Everybody. So please, it's not about what type of person you think you are or you think they are. It's about which situations you or they or anybody judge as good or bad. Everybody wants to preserve their good and change their bad. Duh!

With computational thinking, you want to get from input to output efficiently. With systems thinking, you want to maintain the status quo robustly. Now, computer people have different reactions to this idea. On one level, it's: we don't need systems thinking for loops, we already have loops. In the CS1 class, the second programming assignment is all about loops. But it's really not. The second CS1 programming assignment might use loops, but it's about getting from input to output.
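To make that line-versus-loop contrast concrete, here's a tiny C sketch. Nothing here is from the T2 code base; the names and the toy thermostat are purely illustrative. The first function is a line, input to output, done; the second is a loop, an agent that never really finishes and just keeps nudging its little world back toward normal.

```c
#include <stdio.h>

/* Computational thinking: a line. Take input, produce output, stop. */
int sum(const int *data, int count) {
    int total = 0;
    for (int i = 0; i < count; ++i)   /* uses a loop, but it's about     */
        total += data[i];             /* getting from input to output:   */
    return total;                     /* problem seen -> problem solved  */
}

/* Systems thinking: a loop. An agent that runs on and on, sensing its
   environment and nudging it back toward normal (homeostasis). */
void thermostat(double set_point, int ticks) {
    double room = 15.0;                        /* toy environment        */
    for (int t = 0; t < ticks; ++t) {          /* 'forever', in spirit   */
        double error = set_point - room;
        if (error > 0.5)       room += 1.0;    /* heat a little          */
        else if (error < -0.5) room -= 1.0;    /* cool a little          */
        printf("tick %d: room at %.1f\n", t, room);
    }
}

int main(void) {
    int readings[] = {3, 1, 4, 1, 5};
    printf("line: sum = %d\n", sum(readings, 5));
    thermostat(20.0, 8);                       /* loop: hold the set point */
    return 0;
}
```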
When I advocate systems thinking to algorithms people, typically they see no problem. An algorithm is a tool to use within a larger system, so it's a line segment that can be part of a larger circle. Actually, I absolutely buy that. And it works the other way, too. A system can be mostly a loop but still, over time, make a line. The mop goes around and around, but eventually the hallway is clean. And even the most perfectly balanced system isn't completely closed, not completely circular. There's always some kind of input, some kind of output, even if only from high-energy photons to low-grade heat, from food to waste.

So we can shift perspectives both ways. The line is part of a loop: computational to systems thinking. The loop is a spiral line: systems to computational thinking. Now, each such perspective shift enlarges our focus. Computation is a tool that's part of a bigger system, which itself is used as a tool within some bigger system still, and so on. Computational thinking talks about problem decomposition and hierarchical structure. Systems thinking talks about systems of systems and networks. All good.

So, if it's all a matter of perspective, what's wrong with the way we're computing in society today? The problem is, if we just focus on the tools and forget the systems, if we say the systems are somebody else's problem, we will, perhaps unconsciously but inevitably, design things in ways that favor the tool makers and tool users at the expense of the systems makers and systems users. And that's exactly where we are today. The tools are running the show, and they've stuffed it up spectacularly. And it's now so locked in with interdependent design decisions that most people can't even see that we don't actually have to do it this way.

Think about it. Traditional computing puts many, many tiny steps all in a row, and the rule is every step has to be perfect, and it's the hardware's responsibility to make it so. That's called deterministic execution, or hardware determinism. Going down that road makes you a total control freak. Every step has to be perfect, and nothing else can happen, except for the steps that you specify. Bigger and bigger programs require that perfection across more and more space and more and more time, and sooner or later, not immediately but inevitably, reliability and security are gone.

Now, is such flawless hardware determinism actually necessary to do manufactured digital computing? No, it absolutely is not. But what it does is make the software programmer's job a lot easier, and that's the main goal when the tools are running the show. Now, this gets a little nerdy, but I'll stay high level, so bear with me. The key features of traditional computer architecture, deterministic hardware, CPU and RAM, program and data equivalence, and universal computation: they all make computers easier for programmers, and they all make the systems using those computers more breakable and unsecurable.

At this point, I regret to say, when I talk about this stuff, a lot of computer people just lose interest. They don't know how to disagree with any of it, as far as I can tell, but it seems that typically it's either impossible or just too tedious for them to imagine programming without 100% reliable execution, without arbitrary pointers to oceans of random access memory, without a single controlling thread of execution going from input to output. From their point of view, the computing world is going well enough the way it is.
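To see what "arbitrary pointers to oceans of random access memory" plus perfect obedience actually costs us, here's a deliberately dumb little C sketch. The memory layout and names are made up for illustration, not taken from any real system: one flat buffer, a copy routine that executes exactly the steps it was given, and an authorization flag that gets silently clobbered because nothing in the hardware knows or cares where one thing ends and the next begins.

```c
#include <stdio.h>

/* One flat ocean of memory: bytes 0..7 hold a user name, byte 8 holds
   an 'authorized' flag.  Nothing enforces that layout; the hardware
   faithfully executes whatever steps it is given. */
enum { NAME_OFF = 0, NAME_LEN = 8, AUTH_OFF = 8, MEM_SIZE = 9 };

static unsigned char mem[MEM_SIZE];

/* A perfectly deterministic copy, bounded only by the whole memory,
   not by the field it is supposed to fill.  Every step executes exactly
   as written, including the steps that are wrong. */
static void store_name(const char *input) {
    size_t i = 0;
    while (input[i] != '\0' && NAME_OFF + i < MEM_SIZE) {
        mem[NAME_OFF + i] = (unsigned char)input[i];  /* walks right past
                                                         the name field */
        ++i;
    }
}

int main(void) {
    mem[AUTH_OFF] = 0;                    /* definitely not authorized    */
    store_name("evil_eve!");              /* 9 chars: the '!' (33)        */
    printf("authorized flag is now %d\n", /* lands on the auth byte,      */
           mem[AUTH_OFF]);                /* which now reads as 'true'    */
    return 0;
}
```

That's the whole game for a lot of real attacks: the hardware executes every wrong step just as faithfully as every right one, every single time.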
Yeah, they admit computer security could be better, but their code is good, and for the rest, well, we have top men working on it right now. And I don't really want to be mean. I understand that view. I've written plenty of bugs in my programs, and the occasional security hole, too. But eventually something has to change, and the longer we wait, the worse it's going to be for society, as we place ever more sensitive data and money and cars and trucks and everything else under direct computer control.

Now, I can say that even though the architecture must be fixed, not all programming styles for our current crazy architecture are equally bad. An event-driven program consists of one main loop designed to run indefinitely; that's much more in the systems thinking spirit. And an operating system's job, basically, is to run forever and to withstand whatever problems typically crop up as it does. A lot of the time it seems like our apps crash at the drop of a hat. Serious operating systems do much better, but that's only if we exclude all the malicious attacks that all too often succeed against both apps and operating systems. Attacks that, by the way, are made vastly easier by hardware determinism. But just because it's possible to use traditional computer architecture in ways that are less bad, that doesn't mean things are okay, or even approaching acceptable for the long run. We're not even close to that. We're really screwed at the moment, because we are so dug in on hardware determinism that it's hard to imagine any way forward.

But just for a minute here, late in 2018, let's say you're willing to go along with me that more systems thinking is what we need, to take the tools down a notch and come up with a better approach to computer architecture. What might that really look like? Well, what does calling something a system really mean anyway? Can we claim any pile of things is a system if we want? Or why not? In a deep sense, a system is about shared fate. We see two things, A and B, as being part of the same system if A and B live and die together. If B can get along without A, or vice versa, we're less justified in seeing them as one system. In the other direction, all of the things that tend to live and die in sync with A and B, we'll view as parts of the same system. Of course, it doesn't have to be that black and white. Systems can contain things that benefit without utterly depending on other things, and so forth. Systems frequently form in nature as relatively independent but neighboring things come to depend on each other, taking on complementary roles that help each other. Hey, I can make sugar. Hey, I can make CO2. And sometimes death is more like system dissolution, say like layoffs and bankruptcy, where the previous system components don't, at least immediately, all die.

Now, what makes humans so special, and where we're really going with computation once we get systems thinking properly integrated, is that we create systems consisting of interdependent roles defined not directly by physics, but by language. When you accept that job at that company, you get a title plus a set of obligations and rewards that go with the title, all defined, at least in principle, by language in some three-ring binder, some franchise operation manual, or whatever.
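Coming back for a second to that event-driven style I mentioned, since it's the closest thing in everyday programming to the systems spirit, here's a minimal C sketch. The event types and the scripted event source are invented purely for the example: the whole program is one loop that handles whatever shows up, shrugs off what it doesn't recognize, and goes back to waiting.

```c
#include <stdio.h>

/* Event types invented purely for this sketch. */
typedef enum { EV_TICK, EV_REQUEST, EV_GARBAGE, EV_SHUTDOWN } EventKind;
typedef struct { EventKind kind; int payload; } Event;

/* Stand-in event source; a real one would block on I/O, timers, sensors. */
static Event next_event(int step) {
    static const Event script[] = {
        { EV_TICK, 0 }, { EV_REQUEST, 42 }, { EV_GARBAGE, -1 },
        { EV_REQUEST, 7 }, { EV_SHUTDOWN, 0 },
    };
    return script[step % 5];
}

int main(void) {
    /* The program IS the loop: designed to run indefinitely, absorbing
       whatever shows up and returning to its steady state each time. */
    for (int step = 0; ; ++step) {
        Event ev = next_event(step);
        switch (ev.kind) {
        case EV_TICK:                       /* routine housekeeping */
            break;
        case EV_REQUEST:
            printf("served request %d\n", ev.payload);
            break;
        case EV_SHUTDOWN:
            printf("clean shutdown\n");
            return 0;
        default:                            /* unknown or garbage input:   */
            printf("ignored bad event\n");  /* note it and carry on, don't */
            break;                          /* take the whole system down  */
        }
    }
}
```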
And in that picture of language-defined roles, what the control freaks relentlessly miss, but the leader sees, is that it's vastly more robust to employ semi-independent agents to interpret and apply high-level language to whatever unexpected facts arise, rather than imagining that the job's language specifies everything down to the last bit, like computational thinking wants to push on us. For computational thinking, the solution to every problem is more tiny steps, hopefully specified ahead of time, but if necessary, after things have gone bad. For systems thinking, the solution to every problem is more systems of systems and more interactions: find somebody, hopefully on staff but outside if necessary, who has successfully handled similar problems before.

We need both mindsets front and center. We cannot allow either to ignore the other. And yes, it's a big change, but I know we can build new computer architecture to play to both their strengths. The T2 Tile Project works for bigger and safer computing using living systems principles. Follow our progress here on T Tuesday Updates. Happy holidays.