The T2 Tile Project is building an indefinitely scalable computational stack. Follow our progress here on T2 Tuesday updates.

So this is a special Christmas Eve rant for 2019. It's called "Here's Why Traditional Computer Security is a Lost Cause." Now, I've been saying stuff like this for a long time, like in the special Christmas rant last year, and there's a small but growing community of folks that get it, and that's great. But an awful lot of people in academia, industry, and society at large absolutely don't get it, and that's a real problem as we humans continue to develop and deploy ever more hopelessly insecure computing devices that we connect to everything from front doors to job applications to the steering wheels of mobile battering rams.

And sure, people are aware of malware and spam and identity theft, and they think, yeah, a computer security attack is an aggravation, and I sure hope it doesn't happen to me. But it's a hypothetical aggravation unless and until it does, whereas all the supposed computer security defenses, the passwords and keys and factors and fobs, are aggravating all day long, and you can still get caught. And in any case, they figure, it's certainly not the worst thing that could happen, and I've got plenty of things to worry about.

The problem is that view acts like the computer security problems we see today are about as bad as they're going to get, on the one hand, and that there's no fundamental way to make things any better, on the other hand. But both of those hands are wrong. As far as I can see, the main reason computer security attacks aren't yet bigger is because people are still figuring out how to monetize bigger. Ransomware isn't close to done; it's moving on to bigger targets and spawning a whole crime-services ecosystem. And don't look now, but the hopelessness of computer security is threatening to make the promise of cryptocurrency go up in a puff of smoke, like the exploding safe on the train in the old Wild West movie.

And again, none of that would matter if there was no way to make anything better, but there is. But getting to it begins by admitting the real cause of the problem, which isn't nasty criminals, clueless users, lazy programmers, ship-or-die managers, "do be evil" corporate execs, three-letter agencies, or tin-pot dictators. The root cause of the problem is the architecture of digital computation, the central processing unit and random access memory, and the implicit contract between hardware and software, called deterministic execution or hardware determinism, that underlies it.

So last year I talked about computational thinking versus systems thinking, because that explains a lot of what goes wrong when traditional computing is incorporated into real-world systems. And that's super important for setting the stage, but I feel like I didn't really deliver on the detailed argument for why traditional computer security is a lost cause, so that's what I want to do right now.

And again, a warning up front: this is not an easy drop-in solution to computer security. We're not talking about some secret opt-out setting buried in your phone, not a plug-in for your browser or your spreadsheet, not some monthly antivirus subscription. Such security mitigations might be a little helpful sometimes, but to be honest, mostly they're just there to give you an illusion of control, or to keep you calm by rearranging the deck chairs on the Titanic, or to sell you razor blades that keep getting dull whether you use them or not.
Now, when the architecture is wrong, sooner or later you have to build new buildings. That sounds like a lot of work, and it is, but it doesn't mean we have to start completely from scratch. We can still build on the low-level components, all the bricks and 2x4s and nail guns of digital machinery, but with new blueprints about how to put them together and how the resulting machine will work. The problem is that right now we don't know what kind of blueprints we want. People look to biology for inspiration: neural nets, spiking machines, slime mold computing. And all that's fine as far as it goes, but it's also kind of imitating the surface forms of living systems rather than their core principles, which are robustness, approximation, and interconnection, rather than the efficiency, correctness, and isolation of traditional computing. So what I want to do here is try to explain this stuff as plainly as I can, from as close to first principles as I can, in not too much time. So here we go.

The digital computer is based on simple two-way distinctions, those famous bits, binary digits, zero or one. Part of what makes a bit so handy is that it's easy to make out of physical stuff, using a voltage or a water level or a bead on a stick or whatever. To be a decent bit we only need three things. First, the stuff has to have two clearly defined and well-separated states or positions or values that we can associate with zero and one. Second, we have to be able to switch between those two states. And third, we need some kind of restoring force, some kind of error corrector, that will keep an eye out for any and all noise or variation in those represented bits, and restore the values back to one clean extreme or the other before there's any appreciable chance of actually crossing over to the other side. An amplifier, something that makes up more up and down more down, is perfect for that job.

Now, in the land of hi-fi audio, an amplifier is a big deal, and getting a clean, powerful, linear amplifier can be pretty expensive, and people can get pretty religious about them. But for digital purposes, amplifiers can be incredibly cheap. They don't need to maintain the subtleties of the input signal. To the contrary, their job is to smash out any variation in the input signal other than what counts as a zero or a one.

And the architecture of digital hardware puts amplifiers everywhere. Every digital circuit component is continually recomputing its function, like a NOT gate perpetually outputting the opposite of its current input. But every gate is also an amplifier, and the amplifiers are not just in the control and processing circuits. In traditional static random access memory, SRAM, there's an amplifier for every single bit. The more economical DRAM, dynamic RAM, has a group of sense amplifiers that are applied to refresh different groups of bits sequentially, but the idea is the same: there is explicit local circuitry that is continually restoring the system to a pure digital state.

And it's that relentless signal amplification, that endless error correction happening all through the machine, that allows digital hardware to "guarantee" deterministic execution. Now, "guarantee" is really in quotes, because it's all relative to how big the computer is, and how many steps the program needs to do, and how well the environment is controlled as required, for example, to get rid of excess heat. But that's the big picture. Digital hardware is continually and actively suppressing errors throughout the machine, but that extreme robustness ends at the software level.
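To make that restoring force concrete before we move up to the software level, here's a tiny toy model in C++. It's my own illustration, not anything from the project: the restore function and the noise magnitude are made-up stand-ins for a real amplifier and real physics. The point is just that a stored level that drifts gets smashed back toward a clean zero or one on every tick.

    // Toy model of a physical bit with a restoring amplifier.
    // Each tick, noise degrades the stored level, and a cheap
    // nonlinear amplifier smashes it back toward 0.0 or 1.0.
    #include <cstdio>
    #include <cstdlib>

    // "Up more up, down more down": push the level toward the rails.
    double restore(double v) {
        return v < 0.5 ? v * v : 1.0 - (1.0 - v) * (1.0 - v);
    }

    int main() {
        double level = 1.0;  // physically store a logical 1
        for (int tick = 0; tick < 20; ++tick) {
            // The world is noisy: perturb the level by up to +/- 0.2.
            level += (rand() / (double) RAND_MAX - 0.5) * 0.4;
            if (level < 0.0) level = 0.0;
            if (level > 1.0) level = 1.0;
            // The amplifier continually re-cleans the signal.
            level = restore(restore(level));
            printf("tick %2d: level=%.3f reads as %d\n",
                   tick, level, level >= 0.5 ? 1 : 0);
        }
        return 0;
    }

Drop the restore calls and the level eventually random-walks across 0.5 and the bit silently flips; with them, the two states stay well separated indefinitely. That nonstop cleanup is the job every gate and sense amplifier in a real machine is doing, and it's exactly the kind of vigilance that stops at the software boundary.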
The thinking is: hey, hardware is guaranteeing to store flawlessly whatever I tell it to. So as a digital computer programmer, my goal is to do the minimum I possibly can to get the job done, because that will be faster. I'll never recompute anything. I'll never check my work, because deterministic hardware has my back. It's pedal to the metal, man. We're on rails. Nothing can go wrong.

And that's it. That's the fundamental flaw right there, because of course plenty can go wrong. It's called bugs. Hardware can have bugs, like the Rowhammer attack, where writing memory in certain patterns can cause supposedly unrelated bits to just flip. Or hardware can do exactly what it's designed to do, but that ends up not being what we really want, like the Spectre and Meltdown attacks that extract information that was supposed to be secret. But hardware bugs are just the tip of the iceberg. Software today is nothing if not a churning cauldron of bugs. Most of them just cause an app to flake out, or possibly the machine to lock up, but who knows?

In the CPU-and-RAM model, each bit of memory guarantees to remember the last thing the processor stored there, but that's all. Suppose, in particular, that in some part of some program, two different bits were supposed to be opposites. Now, if we had implemented that in hardware, we could have used one of those NOT gates to express the relationship between one and the other, or we could have added an XOR gate that took both as input, which would produce a zero if the bits were ever the same. But in software, there's nothing that represents that relationship on an ongoing basis. The fact that those bits are supposed to be opposites is expressed by storing opposite values in them once, and then trusting that for as long as needed. If a bug happens such that one of those bits gets flipped, no one will notice. If we're lucky, the program has fallen off the rails and will soon crash. But more often, the train has really jumped the tracks, and now it's just going somewhere different than it was before.

Digital hardware circuitry preserves individual bit values in memory, but it knows nothing about the relationships that those bits are supposed to have. So when a bug causes some relationship to be violated, no one knows. Software for traditional digital computing ends up building extremely complex, higher-order functions, where the outputs depend on the joint values of many, many input bits, but none of those relationships are explicitly represented in the machine.

And that's why traditional computer security is a lost cause. Number one, hardware determinism isn't enough, because all valid digital states look equally good to hardware. When the train hits a bug and jumps the tracks, the hardware won't save us. And number two, non-trivial software will never be completely correct. Sorry, the last bug is a fairy tale. Sure, we should try to write code that really does express what we want to happen, but we cannot pretend that traditional computing, CPU and RAM and hardware determinism, will ever be seriously trustworthy, except in the tiniest and most unrealistic circumstances.
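Here's that two-opposite-bits story as a minimal toy in C++. The variable names and the injected "bug" are just illustrative: the invariant that y is the opposite of x gets enforced exactly once, at the store, and the check at the end is the software stand-in for the XOR gate that nothing in the machine is actually running.

    // Toy of the software-side problem: the invariant "y is the
    // opposite of x" is expressed once, at the store, then only trusted.
    #include <cstdio>

    int main() {
        bool x = true;
        bool y = !x;   // the relationship is expressed here, once

        // ... thousands of instructions later, a bug (a wild pointer,
        // a flipped bit, whatever) silently clobbers y:
        y = x;

        // This is the check an XOR gate would be computing continually
        // if the relationship were wired into hardware. In software,
        // nothing runs it unless we remember to, everywhere, forever:
        bool violated = (x == y);
        printf("x=%d y=%d invariant violated=%d\n", x, y, violated);

        // The program doesn't crash; it just keeps running on the
        // wrong track, somewhere different than it was before.
        return 0;
    }

Multiply that one unchecked relationship by the millions any real program depends on, and you have the churning cauldron.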
So now, consider an alternative. Instead of having all the processing in the CPU, and all the RAM just dedicated to preserving individual bits with no idea about their desired relationships, suppose we had little bits of processing intermingled with memory throughout the physical space of the machine. Suppose that if the machine wanted bit y to be the opposite of bit x, that would be implemented not by a one-time read-compute-write operation, but by a reconfiguration of some local processing hardware to perform a NOT operation from x to y. So if y got changed, due to a bug, say, it would get restored to be NOT x in the very next moment.

With this kind of approach, the hardware is now, in effect, performing higher-order error correction for us, on a program-specific basis. It's no longer passive memory. It's a configurable circuit where each of its components is continually rechecking its desired relationships with its neighbors. And suppose the configuration of all that processing and memory is not just a boot-time setup at power-on, but can happen incrementally, on a moment-by-moment basis. If the overall computation evolves such that y no longer needs to be the opposite of x, well, that local memory and processing can be reconfigured to do something else. The hardware configuration is part of the program itself. That means, on the one hand, a proper program will be able to grow the configuration it needs from some simple starting point. And that seems like a lot of extra programming work compared to the breezy land of CPU and RAM, and it is. But on the other hand, what that additional work can buy us is the ability to incrementally repair, maintain, and rebuild the computation, under circumstances where traditional computing has long since crashed or been owned. (For the programmers in the audience, there's a little toy sketch of this idea at the end, below.)

Of course, this alternative way to compute doesn't guarantee there will be no more bugs. There will always be bugs. But it does mean that when the train jumps the tracks, it's much more likely to be detected quickly, and, if we wrote the code that way, for the errorful structures to be repaired, or destroyed and rebuilt. And this alternative way to compute doesn't guarantee there will be no more computer security failures either. But it does fantastically raise the difficulty of an attack, because the system is continually rebuilding and refreshing itself. And in a deep and fundamental way, totally unlike the brittleness of CPU and RAM, this sort of machine knows what it's doing.

This alternative way to compute, the way we're studying it, is called the Movable Feast Machine. And fleshing out this alternative way to compute is what the T2 Tile Project is all about. The T2 tiles provide hardware with distributed processing and local memory, instead of central processing and random access memory. And just as programming languages and software libraries have greatly eased the burden of writing traditional buggy and insecure code, we are building analogous programming languages and libraries for the Movable Feast Machine. It's slow going, and there's a long way to go. But the result will be living computation, and I invite you to join us in this journey.

If you're an interested programmer, that would be super. Come chat in our Gitter, link below. Or if you're a fellow traveler on related roads, let us know you're out there. It's about robustness, approximation, and interconnectedness. And we should all try to expect more from our computational systems. Happy holidays. I hope to see you next week.
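And here's that promised toy sketch: a minimal C++ simulation of the y-is-NOT-x idea above. To be clear, this is my own illustrative toy, not the project's actual code. Cells here hold a value plus a configured relationship to a neighbor, and every tick they re-derive their value instead of trusting a one-time store.

    // A toy simulation of active, relationship-aware memory: cell y is
    // configured to be the NOT of cell x, and re-enforces that every
    // tick instead of trusting a one-time store. (Illustrative only;
    // not the actual Movable Feast Machine code.)
    #include <cstdio>

    struct Cell {
        bool value;
        int  notOf;  // index of the cell this one must oppose, or -1
    };

    int main() {
        Cell mem[2] = { { true,  -1 },    // x: a free bit
                        { false,  0 } };  // y: configured as NOT of mem[0]

        for (int tick = 0; tick < 4; ++tick) {
            if (tick == 2)                    // inject a bug:
                mem[1].value = mem[0].value;  // y silently clobbered

            printf("tick %d before repair: x=%d y=%d\n",
                   tick, mem[0].value, mem[1].value);

            // Every tick, every configured cell locally re-derives its
            // value from its neighbor: ongoing, program-specific error
            // correction instead of a one-time write.
            for (Cell &c : mem)
                if (c.notOf >= 0)
                    c.value = !mem[c.notOf].value;

            printf("tick %d after repair:  x=%d y=%d\n",
                   tick, mem[0].value, mem[1].value);
        }
        // Reconfiguring notOf at runtime would be the incremental,
        // moment-by-moment reconfiguration described above: the wiring
        // is part of the program itself.
        return 0;
    }

The bug injected at tick 2 shows up in that tick's "before repair" line and is gone by its "after repair" line. In a real spatial machine, that repair would be local circuitry running everywhere at once, not a sequential loop.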