Computers keep changing the world, but their power and safety are limited by their rigid design. The T2 Tile project works toward bigger and safer computing using living systems principles. Follow our progress here in the T Tuesday updates.

Stop eating the glass sandwich. It's a metaphor for what society should do about computer security. Why is computer security so bad? I talked to some people, and one answer is: it's not bad. Says who? Compared to what? Computers are really complicated. Everything is fine. And no, things are not fine. The computers that we have, we're connecting them not only to our personal information and our money, we're connecting them to physical devices: weapons and drones and airplanes and cars. Computer security matters more every day, and we're not getting any better at providing it. If anything, we're going the other way. Why is computer security so bad? Ask people and they'll say it's because users refuse to patch their software, or programmers are stupid and write bugs, or managers don't care and force crappy software to ship, systems that aren't ready. All of this is true, and all of it is a tiny drop in the bucket. It misses the main point. Why is computer security so bad? Because corporations are greedy and will shift their costs in any direction they can get away with. Why is computer security bad? Because people are cheap; they want everything for free. Why is computer security so bad? Because politicians are clueless or corrupt. All of these things are true to a degree. All of them miss the forest for the trees. Why is computer security so bad? Because of the inherent architecture of computer hardware and software. Oh, snow on a giant roof that's not supported in the middle caused the roof to fail? Hmm, what a surprise. That must be a failure of the owner of the theater.
Blaming users for computer security failures is like blaming the car that was on the Tacoma Narrows Bridge when it collapsed. Blaming managers, blaming people, all of this stuff is like blaming the rain when a brand-new apartment building falls completely over while it's being constructed. It's the architecture that's wrong. And what about the architecture? It's very simple. It's this idea that you're going to have these perfectly square bits, that the physics is going to get all of its messiness squished out of it until you get mathematically perfect ones and zeros that you can manipulate purely with mathematics. And in particular, you have this idea of a CPU and RAM. The CPU, the central processing unit, is where all the change happens. The RAM, random access memory, is where none of the change happens. Nothing happens in RAM. Everything must remain exactly the same until the CPU comes along and says to change it. So you can write something on step one of a program and then read it back on step one trillion of the program, and it has to be exactly the same, because that's the responsibility of the hardware. Hardware determinism. If the architecture is to blame, well, this architecture is attributed to John von Neumann, although a whole lot of other people were really involved in it, even though von Neumann got his name on it. So we're supposed to blame him for computer security being so bad? Well, maybe a little bit, but he warned us. In 1948 he said that the future logic of automata, the future computer architecture, will differ from the present system, because we'll have to care about the lengths of programs. Being able to write something on step one and read it back on step one trillion, with software getting to assume it's 100% guaranteed to be correct, or if it's not, it's a hardware failure, therefore software is allowed to do anything.
That idea, that software is allowed to go crazy if the assumption is violated, is going to have to be reconsidered. Similarly, all of the operations of logic, the steps of the program, are going to have to allow malfunctions. What that means is that hardware determinism, the idea that the hardware is guaranteed to be perfect regardless of how long the computation runs, is going to have to go away. John von Neumann told us this in 1948. At the time he thought there would be maybe 10,000 gates, transistors or something like that, in a computer. Today we are well over a billion gates, and we're still not paying attention to him. By way of example, here's a cartoon of my laptop, which in fact is at this moment being used to record the camera making this update. The CPU is maybe something like half a billion transistors, and it's completely dwarfed by the memory, which is something like 70 billion transistors. These numbers are all very vague, but they're in the correct ballpark, and the point is they're huge. We're just making more and more and more random access memory, with patterns and software all over the place, a veritable palette of zillions of bit patterns and combinations and instructions. And if something goes wrong, no, not if, when something goes wrong, when control flow gets diverted, or a buffer overflows, or there's a version mismatch in the system, or whatever, and control is given over to an attacker, someone other than ourselves, someone other than whoever we thought it was supposed to be when we designed the system, they now have available to them a huge palette to draw from, to pull together a program that makes things happen the way they want. It's called a weird machine: the idea that once you can knock a computer off its expectations, you can basically make it do anything you want. Why? Because random access memory is random access. Once you get control, you get to look at the whole thing.
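As a toy illustration of the "huge palette" point (mine, not from the talk): in return-oriented programming, a classic weird-machine technique, an attacker scans whatever bytes happen to be in memory for x86 `ret` opcodes (`0xC3`) and chains the byte sequences just before them as "gadgets", even when those bytes were never meant to be entered as instructions. The function name here is made up for the sketch.

```python
# Toy sketch (my illustration): 0xC3 is the one-byte x86 `ret` opcode.
# Any stray 0xC3 in memory marks the end of a potential ROP "gadget",
# even inside bytes the original program meant as data or as the middle
# of some other instruction.

def find_gadget_ends(blob: bytes) -> list[int]:
    """Return the offsets of every 0xC3 (`ret`) byte in a memory blob."""
    return [i for i, b in enumerate(blob) if b == 0xC3]

# Even a few "innocent" instruction bytes already contain candidates;
# the 0xC3 at offset 2 is a ModRM byte, not an intended `ret`.
blob = bytes([0x48, 0x89, 0xC3, 0x90, 0xC3, 0x55, 0x31, 0xC0, 0xC3])
print(find_gadget_ends(blob))  # [2, 4, 8]
```

The more memory an attacker can see, the bigger this palette gets, which is the weird-machine argument in miniature.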
This is not fine. CPU and RAM is step one in the way we've been doing computing since the von Neumann machine era, since John von Neumann's original work, and there is no viable alternative on the table as it stands. There are all kinds of ideas about how to harden it up, patch this, get around that, and so forth, but it's all the same essential idea. Consider two recent flaws. One is called Spectre. It's an attack against CPU-and-RAM machines that exploits speculative execution; the details we don't care about here. The point is, it's incredibly deep and incredibly hard to fix. We've been making computers more and more efficient, making that one CPU in the middle run ever quicker and ever quicker. To do that, we've been doing all this speculative execution, counting on it not leaking information to the rest of the system, so that bad guys can't learn things they're not supposed to know. And it looks terrible. Actually fixing it looks really terrible. On the other hand, there's another flaw, a little older than Spectre, called Rowhammer. The idea is that you can use the computer completely according to the rules, with no software bugs involved at all. You just do certain memory accesses: read this, read this, read this, then read the other thing again, like a trick, like a con, and you can actually change bits and learn things you're not supposed to learn. We are not going to make secure computers with hardware determinism and CPU and RAM. It's game over. This is not a widely held position, but it's a sincerely held position here. Traditional computing is a glass sandwich. You've got physics at the bottom, and physics is full of noise: electrons and protons all shooting in every direction.
And the idea is to make electronic circuits that tame all of that noise, that take all of those electrons shooting around and channel them into wires, and take all of the different possible voltages that a collection of charge can represent and categorize them: if it's bigger than this, call it a 1; if it's smaller than that, call it a 0. That is a tremendous act of redundancy. Before digital computers, there were analog computers. They could get maybe three digits of accuracy out of a single voltage, with some error, but a lot of bits. A digital circuit throws all of that away: it takes all of that information, all of those charges, and makes it just a 0 or a 1. And that, in fact, is where the hardware determinism we've built our entire house of cards of software on top of comes from. So that's the bottom piece of bread: we have a ton of redundancy built into digital hardware, by its very nature, to give us behavior we can predict. Then in the middle we have the software, and the software has this guarantee: hardware determinism. If anything goes wrong, if a bit gets flipped, if something doesn't stay the way it's supposed to, it's hardware's fault. So the incentives are wrong. Just like software manufacturers don't have to take responsibility for the bugs in their code, software doesn't have to take responsibility for reliability or robustness at all, because of hardware determinism. So we get this house of cards, this prickly, brittle thing. People don't want to call it brittle, and nobody deliberately makes it brittle. But it is. Why? Because when you make something more efficient, you automatically take out redundancy, and that makes things more fragile and more brittle.
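A minimal sketch of that categorization step (my illustration, with made-up thresholds rather than any real logic family's levels): an entire range of noisy analog voltages collapses into a single bit, which is exactly where the redundancy, and the determinism built on it, comes from.

```python
# Sketch of the digital abstraction (illustrative thresholds, not a real
# logic family): a whole range of analog voltages collapses into one bit.
# Voltages between the thresholds fall in a forbidden zone the circuit
# makes no promises about.

def to_bit(voltage: float, v_low: float = 0.8, v_high: float = 2.0) -> int:
    """Categorize an analog voltage as a digital 0 or 1."""
    if voltage <= v_low:
        return 0
    if voltage >= v_high:
        return 1
    raise ValueError("forbidden zone: no guarantee here")

# Many different noisy voltages all read back as the same bit:
print([to_bit(v) for v in (0.0, 0.3, 0.79)])  # [0, 0, 0]
print([to_bit(v) for v in (2.0, 2.6, 3.3)])   # [1, 1, 1]
```

An analog machine would treat 2.0 and 3.3 as different values carrying information; the digital abstraction spends that information on predictability.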
Now, at the top, in big computing, you have data center features, failover, hot spares, and all of these things, some of which can also be done by end users, and that gets you a little bit of robustness back again. But the problem is you have this glass sandwich. You have this brittle, sharp, unpredictable thing, and when something goes wrong, you really have no idea how badly it's going to go wrong. And this picture makes it look like a nice little hoagie you'd get at the store, but really it's more like this: a ton of software, an overstuffed sandwich, glass upon glass upon glass, and you're lucky if you have any data center features on top. We have to stop eating the glass sandwich. What we want to do instead is take that thing, which has grown gigantic, and shrink it down, and have lots and lots of them. A little internal physics with robustness features at each level; software that doesn't span a tremendous amount of reach, but is small, is designed to be robust, and has its own internal goals. Without even worrying about what's going on in the big picture, each of these software-and-hardware combinations has regularities. It has homeostasis. It has the authority to take actions to keep itself together. That's the way to get to robust computing: take lots of those little hamburgers and build a distributed system on top of them. Computer security is terrible, and it's going to get worse unless we do something about it. We've kind of painted ourselves into a corner, and a lot of people won't even admit it. They're saying, yeah, you're supposed to be in the corner. It's cozy in the corner. Everything's fine. This is fine. Computing today, based on hardware determinism, is deliberately and inherently optimized for brittleness, fragility, and insecurity. All of the things that you get taught, binary numbers, CPU and RAM, efficient algorithms, they all contribute to that.
Universality, the idea that you want your computer to be universally programmable at a nice low level? No, it's terrible. How are we going to fix it? The fix is to start not from correct-and-efficient computing but from robust-first computing. It's not going to stop until the field wises up. There's nothing wrong with current computer architecture, nothing wrong with von Neumann machines, as long as they're tiny little hamburgers, little sliders, not twelve-foot hoagies. We've got to find ways to do better. We've got to find ways to do best-effort, robust-first computing. Step one: abandon hardware determinism. It's an incredibly tough pill to swallow, but determinism is a property of small systems, and we've already vastly overstretched what we can do with it. Step two: delay universality. The usual idea is to make everything as programmable as quickly as possible, to get to where you can bootstrap everything in terms of itself. That's a fun stunt, but it leads to vast unpredictability when something goes wrong at that level. J. Presper Eckert, reflecting on the development of ENIAC (he was one of the engineers who actually did a ton of the work on what became called the von Neumann machine), said: we carried out a very radical idea in a very conservative fashion. That's what the T2 Tile project is about. We're going to get rid of random access memory and instead bring processing and physical space back into computing. The re-spatialization of computing. And we're doing it in a very conservative fashion, using Linux boxes and off-the-shelf hardware wherever we can. All right, let's stop there. Stop eating the glass sandwich. This is the 40th T Tuesday update. Let's get into it. Our goal for this episode was to have Ring Lotus running: 133 tiles or more running on the wall, ready to be benchmarked. We didn't get close to that, mostly because software is hard. I thought hardware was hard. Software is hard too.
This is what happens when you're building a whole computational stack. What we did accomplish is something more like this: a 3x3 grid of tiles that talks CDM, the common data manager, and that moves Movable Feast Machine flash traffic at once, so that we can send commands around saying "reset yourself" and so forth. It's a more modest goal, but it's what we actually managed to accomplish. Going forward from episode 40, this is the goal I want to set for us. Goal for July 26th: get the first MFM intertile event happening. Get it running. Get DReg, Res, ForkBomb, whatever it is, starting on one tile, ending up on another tile, and continuing from there. That's the goal. Why July 26th? Because on July 27th we're leaving for Newcastle upon Tyne for the Artificial Life conference. I would dearly love to bring some tiles along to do a demo, but even if we don't bring them, getting an event running by then is the goal. All right, we're almost out of time, but a few more things. I'm printing up feet. Look at this: 400 little screw-in feet. Well, this is 100; this is 400. We're still going to have to do a lot more of the framing, go back and revisit the acrylic and decide what to do about that, lots of stuff to deal with. We also need the power injectors. We've done most of the DP intertile connectors. We've done all of the DOs, the ones that are going to go around the power zones. The power injector boards are still at OSH Park; they haven't come back yet, but we've printed up the parts to go with them. Most scary: when I was dealing with all this hardware stuff, I started to see cases where, when you powered up the whole grid, everybody would come up, unless they had one of those systemd failures. But if you rebooted one guy in the middle, some of them, in cases I don't understand, would fail to boot, and fail to boot badly.
I mean, even with the serial cable attached, there was nothing. The hypothesis is that some kind of electrical noise is coming in from the neighbors, enough to mess things up: when BeagleBones boot, they sense the state of certain lines to decide how they're supposed to boot, and I went to great trouble designing the board to take that into consideration, but it seems like maybe there's a problem there. We will see. On the software side, Anton Mikhailov, Marius Cezynski, and AJ Zaf are going great guns getting a Movable Feast Machine running on GPUs. Again, this is outreach and a way to live in the future, a way to prototype things; even though the GPU is only finitely scalable, it'll let us try lots of things out. I would love to tell you the story of the bug that I fixed this week, but I'm out of time, so I'm just going to say it was a combination of a hardware bug and a software bug simultaneously, each sort of covering the other up, and of course it was my own stupidity that got us there to begin with, for the software bug in particular. It's now fixed, and the only thing I can say for it, aside from the fact that it cost me two days or more, is that I ended up designing a whole new little language so you can configure the packet transfer system with what's called a trace point: whenever a packet moves in or out of one of the buffers on the way into or out of the system, you can check it and say, if the packet under this mask matches this value, print it out. That was key to running down the bug that turned out to be the combination of hardware and software. So, far from Ring Lotus running on the wall. Rather than worry too much about the giant failure, I want to take it as a success for what it is. It's not success; it's failing with style. And finally, again, the goal for the rest of July is basically to get an MFM event happening between two tiles.
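To make the trace-point idea concrete, here is a sketch of the mask-and-match test as I understand it from the description above. The function names and the byte-oriented framing are my own; the actual T2 trace-point configuration language surely differs.

```python
# Sketch of a trace-point match (my framing, not the real T2 language):
# a packet matches when the bits selected by the mask equal the given
# value, and matching packets get printed for debugging.

def trace_match(packet: bytes, mask: int, value: int) -> bool:
    """True if the first byte of `packet`, under `mask`, equals `value`."""
    return bool(packet) and (packet[0] & mask) == value

def trace_point(packet: bytes, mask: int, value: int) -> None:
    """Print any packet crossing this point whose masked bits match."""
    if trace_match(packet, mask, value):
        print(packet.hex())

# Match packets whose top two header bits are 10 (i.e. 0x80..0xBF):
print(trace_match(b"\x9f\x01", 0xC0, 0x80))  # True
print(trace_match(b"\x41\x01", 0xC0, 0x80))  # False
```

The appeal of the mask-and-value form is that one cheap bitwise test can select a whole family of packet types at a buffer boundary without parsing anything.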
That intertile event will be a giant milestone, even if it won't be quite ready to benchmark the DReg-Res physics. The next update will be out in a week. I'm glad you're here. Have a good week.