Good evening and welcome to C4's von Neumann Public Lecture series in complexity and computation. My name is Jessica Flack. I'm the co-director of the Center for Complexity and Collective Computation, or C4, here in WID. Now before introducing tonight's speaker, Dave Ackley, a few words of thanks to WID's IT and admin teams who help organize this series. I'd also like to highlight that the next public lecture, which is on our networked world, will be given by Raissa D'Souza, a physicist at UC Davis, and that's November 5th, so please join us for that. It should be a fun lecture. And finally, a word about support for the series. Last year the lecture series was supported by the John Templeton Foundation and a generous gift from John Wiley, the former chancellor of UW. This year we are seeking new sources of funding, and I really mean that. If you are enjoying the series, please consider making a donation. Information about how to make a donation is up on the slides here and can also be found on the WID and C4 websites. And I really want to emphasize that only with your support will we be able to continue this series, so any kind of gift would be appreciated. Thank you so much. Okay. So, tonight's lecture. John von Neumann, the scientist for whom this series is named, is considered one of the founding figures of computing. In addition to his many accomplishments in mathematics and quantum mechanics, he developed one of the first computer architectures that could support complicated programs that could be developed and debugged relatively quickly. Now, this was many years ago; almost seventy years after its development, the von Neumann architecture remains an important part of the design of modern computers. 
And the fact that it's persisted for as long as it has is a testament of course to its quality, but it also perhaps is an indication of what, in the study of technological and evolutionary innovation, we call lock-in: an inertia that prevents adaptation to a changed or changing environment, even though that adaptation is critical for survival. Current challenges to machine development, as I'm sure most of you know, include CPU scalability and the increasing sophistication of the algorithms and viruses used by hackers to gain unauthorized access to data and networks. Now, von Neumann recognized the limitations of his designs, perhaps because he was also the founder of the field of game theory and the inventor of one of the first computer viruses. So he understood that robustness would eventually become a critical part of any architecture. So tonight, Dave Ackley is going to drill down into von Neumann's prediction and tell us about how he might use ideas from living systems and the study of artificial life to improve the design, behavior, and robustness of our computers. Dave Ackley is an associate professor of computer science at the University of New Mexico. He's got degrees from Tufts and Carnegie Mellon. His work focused on neural networks and artificial life in the 1980s, encrypted and distributed social networks in the 1990s, and biological approaches to computer science in the 2000s. And since 2008, it's focused on research and advocacy for robust and scalable computer architectures. I first heard him talk at a complex systems summer school some years ago where he spoke about homeostatic architectures for robust computing, or at least that's how I remember it. It's a talk that's stuck with me over the years and influenced my own thinking about how nature computes, which is one of the things we work on in C4. So it's a real pleasure to welcome Dave tonight. He's one of the most wacky, creative and fun scientists around. 
Jessica, thank you, everybody. Thanks for coming. This is great. This is really exciting. Artificial life for bigger and safer computing. I've been giving various versions of this talk for six years now anyway, or even longer if you reach back into the thing. It sort of turns out you sort of back into having a point without even knowing about it, and I finally got around to admitting it. And I realized that there's something that seems really obvious to me that does not seem sufficiently obvious to enough people. And so what I want to try to do is try it out on y'all tonight and see if I can get you to get it. If you have a quick question, I'd be happy to take it as we go through. We certainly can take questions at the end. I have a tendency to talk fairly long, so experience says I should start with the conclusions. Here they are. The way we compute today, the way your phone, your tablet, your PC computes today is inherently and ultimately unscalable. It's not going to be able to keep getting bigger and more powerful using the design that we're currently using. Worse, it is unsecurable. The way we build computers today, the way we program computers today, it's essentially impossible to make them secure. It's easy enough to blame the user. Did you update your viruses? It's easy enough to blame the programmer. You made a bug, but that misses the bigger point. The bigger point is the architecture itself, the way the computer is designed, makes it virtually impossible to be correct, to be secure. So I want to try to make that point. The alternative, and it wouldn't matter. If things were really terrible, it wouldn't matter if there was no other choice. But there is another choice. If we are willing to look at living systems, everything from us down to bacteria, perhaps beyond, as kinds of computations, as kinds of machines, they manifest a very different architecture of computation from what we are using to build the machines that we use. 
We can have robust, indefinitely scalable computations if we model them on the way that living systems work, viewed as computations. And if we do that, we will get computing systems which will be much bigger, which kind of excites me, but also much safer, which should excite everybody. So the action item I want to leave with all of you is to ask this question: why are we racing to entrust our valuables to gullible idiots instead of fleshing out the alternative? The computers that we have today, in a very literal sense, are gullible idiots. Perhaps idiot savants, and that's why we want to use them, but idiots nonetheless, and gullible as the day is long. There isn't a con that a computer won't buy. It doesn't have to be that way. We accept it. We shouldn't accept it. We are in crazy land. That's the message. Also, at the conclusions, we thank all the contributors. These are some of them: the students and faculty that I work with, the funding agencies. This semester, I'm doing a seminar with undergrads and grads where we are trying to beat on this new model, this alternate model of computation, which, if all works well, I will demonstrate for you tonight. So we're trying to learn the easiest things, trying to find the biggest bear traps, to understand how these kinds of models work. OK. Oh, yeah. And if there's any problems, I get to be responsible for them. You can kind of get the idea about the ill-advised rants. I've already started doing that. OK. So here's what I want to do. First, I want to deliver on what Jessica mentioned in the introduction: von Neumann's prediction for the future of computation, and why it hasn't happened. It's a story of two approaches to computation that we've already alluded to. Second, I want to just quickly explain the meaning of life, in case anybody is not clear on that, and explain why the pagans had it right and we've been sort of going wrong ever since then. 
Finally, well, next, computer science and computer architecture is political science. The way to understand why the architecture we have today is so messed up is to think of it as a society. Think of the computer as a society and say, what kind of organizational system are we talking about here? If we were going to make an analogy between what the computer is doing and some kind of system that we would see among people, what kind of system would it be? And the hint is it's not a system that we want to live in. And then finally, I really want to save time to try to do some demos to see the beginnings, the absolute simplest, stupidest, lowest level sorts of properties that come out of a different approach to doing computation. All right, start at the top. I really used to hate John von Neumann because he made this von Neumann machine, which I realized intuitively had terrible problems, but I couldn't exactly pin down why. The whole idea is summed up right here. It's a contract. Divide the world between the hardware people and the software people. The hardware people take the unruly, nasty, noisy physical world and turn it into logic, turn it into nice square bits that go from one to the next according to mathematical rules with absolute certainty, asterisk. Hardware turns physics into logic. Software, on the other hand, turns logic into functions. And the story is hardware by itself is worthless. It's a doorstop. It's a room heater. Hardware plus software has to do something that's valuable enough to cause people to cough up enough money to pay for the hardware and the software. That's the computer industry. It's worked great. The computer industry eight years ago was accounting for 10% of the world economy by some measures. It hasn't gotten less. The way that it works is called serial determinism. The way that computing works, the von Neumann machine, you do one thing at a time. Serial, step by step by step by step. 
Determinism means the outcome of each step depends only on the inputs that were there immediately prior. If you know the inputs, you know the output for sure. If you know the new inputs, you know the output for sure. It's deterministic. And that's what allows you to program by making logical inferences. And it's been this gigantic success. But as we know from the abstract, there was one guy who said this approach is going to fall down. And that was also von Neumann. And then I decided I didn't actually hate him quite so much. Here's his prophecy: the future logic of automata, the future way computers work, will differ from the present system, that's the von Neumann machine, in that the actual length of operations will have to be considered. That's crazy. We have programs today. You play a solitaire program. It's hundreds of billions and trillions of steps, one after the other, just to put the damn queen on the king. If any of those went wrong, what would happen? Who knows? He's saying we're going to have to worry about the length of the program, because the longer the program is, the more likely it is that something goes wrong. The operations will have to be allowed to fail. That's exactly what determinism doesn't allow. He thought this would happen by the time computers got to the level of about 10,000 gates. He called them switching organisms, switching organs, sorry. We are now at something like a billion switching organs, gates, in everything, in your phone, soon in your watch. And we're still not listening to his prediction. He offered a way to understand the alternative, having to do with error. It's the comparison between artificial machines and living systems. Natural organisms are designed to hide errors, to make errors as harmless as possible, to ride out the errors, keep on going, heal up later if you can. Artificial automata, machines, computers, are designed to make errors as disastrous as possible. Why? 
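That point about program length can be made concrete with a back-of-the-envelope sketch (mine, not von Neumann's; the per-step error rate is an illustrative assumption):

```python
# A back-of-the-envelope sketch (mine, not von Neumann's) of why program
# length matters under serial determinism: if each step independently
# succeeds with probability (1 - p), an n-step run needs every single
# step to succeed.

def survival_probability(p_error_per_step, n_steps):
    """Chance that an n-step serial computation finishes with no error."""
    return (1.0 - p_error_per_step) ** n_steps

# Assume an (illustrative) error rate of one per trillion steps.
p = 1e-12
for n in (1e6, 1e9, 1e12):
    print(f"{n:.0e} steps -> P(no error) = {survival_probability(p, n):.4f}")
```

Even with an astronomically reliable step, a trillion-step run fails roughly a third of the time; that is the arithmetic behind "the longer the program, the more likely something goes wrong."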
Because if you've ever tried to debug a program, if one thing has gone wrong, it's hard enough to figure out the problem. But if two things have gone wrong simultaneously, it's virtually impossible to figure out what went wrong. Now the possible combinations become astronomical. The only safe thing to do is to stop the insanity the instant anything goes wrong. And that's how computers work. And he's saying we shouldn't be that way. Our behavior is clearly over-caution generated by ignorance. I'm not going to go through this whole table. This is just kind of summing up the two sorts of approaches we have; where's my mess, there it is. So the finite approach is the von Neumann machine. The indefinite approach is the living systems approach. The key thing about an algorithm is that it's finite. It has a bunch of steps, step, step, step, and then it ends. As opposed to a computational process that goes on indefinitely, reacting to things, taking input, producing output, and continuing to run, and so forth. I made a nickname for it. The von Neumann machine approach, the reason it's so enjoyable, is that you are God. You are the master of the universe. Nothing happens except what you decree. Change that bit; stay the same. Yes, sir. And that's great work if you can get it. The indefinite approach doesn't work that way. In the indefinite approach, there are other things happening while you're living. Sorry. The way you left it might not be the way it is when you go back and look again. Member of the team versus master of the universe. What I want to do, rather than going through this in detail, is just take a few key ideas from the knowledge base of computing and understand this duality, this complementarity, between the way we traditionally look at computing and the indefinitely scalable approach, the living systems approach, because they are deeply complementary. 
When I teach in classes, we have a thing called define, defend, and attack. The point is, you don't actually understand something unless you can define it, unless you can say something good about it, and unless you can point out a limitation of it. There's a germ of truth in every idea seriously proposed. No idea captures it all. Define, defend, and attack: DDA. Deterministic hardware, we just talked about it. Define, what is it? Programmable machines that guarantee 100% predictable behavior. Well, what's good about that? Programmers can focus exclusively on features and performance. They don't have to worry about what happens if a step didn't work. What's the drawback? When the inevitable does go wrong, there's no plan B. The programmers focused exclusively on features and performance. They didn't focus on error handling, because the hardware guaranteed there'd be no errors. Actually, 'no plan B' isn't quite right. There is a plan B. Welcome to plan B. This is it. Something goes wrong, and that's the only thing you can do, because there's no conceivable way to go forward once the guarantee has been violated. All right, let's do another one. Binary numbers, what are they? Binary numbers are like regular numbers, one-two-three, except each digit is a power of two, so they're all zeros and ones, okay? Powers of two, so it's just like powers of ten if you only had one finger. What's good about it? It's the most efficient possible digital representation of a set of alternatives. Most efficient possible, mathematically provably optimal. On the other hand, it's the most error-sensitive possible representation of the set of alternatives. You can flip one bit in a typical computer, a typical representation of a binary number. You can flip one bit and the answer will be off by two billion. Close, yeah, yeah, that's my income. These are not coincidental. These are flip sides of the same coin. 
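Here's a tiny illustration of that bit-flip sensitivity (my own sketch, assuming an ordinary 32-bit binary encoding):

```python
# A small sketch (mine, not from the talk) of how error-sensitive a plain
# 32-bit binary number is: one flipped bit near the top changes the value
# by about two billion.

def flip_bit(value, bit, width=32):
    """Return `value` with one bit flipped, kept to `width` bits."""
    return (value ^ (1 << bit)) & ((1 << width) - 1)

balance = 1000
corrupted = flip_bit(balance, 31)  # flip the highest of the 32 bits
print(balance, "->", corrupted)    # 1000 -> 2147484648
```

The very compactness that makes the encoding optimal is what lets a single flipped bit move the answer by two billion: the defense and the attack are the same property.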
The very thing that made it so efficient makes it so prone to error, so able to cause such great damage if an error occurs. It's a duality. And yet in computer science, in computer engineering, in the computer industry, we've really only ever focused on the defense. We've completely closed our eyes to the attack. All right, one more. The idea of universal computation, another brilliant theoretical breakthrough from von Neumann's era: the idea of dividing a machine into fixed hardware that does the same thing over and over again, and programmable, flexible, modifiable software that allows you to change what the machine does. And if you do it right, defend, we can arrange it so that one particular machine, one single machine, is actually able to compute anything that can be computed. That's the incredible theoretical result. That's why the computation is universal. One machine can compute anything that can be computed. Wow, that's pretty good. But you should get the idea now. So what's the attack? A single machine with a single flaw can be made to compute anything else. You get the Shellshock bug. That's one teeny little problem in code that's in millions and millions of machines. And all of a sudden, that machine is sending passwords and spam to Russia. A single flaw means the machine can now be reprogrammed, and that universality, which was so good for us, is now equally bad for us. Okay, that's the idea. That's the duality that we haven't recognized enough, that's put us into the situation that we are in now. Okay. So, computer science, engineering, industry, everything is incredibly invested in the efficient approach. The idea that all you have to do is make it be correct. That is to say, get the features the way you want. Von Neumann's prediction was expecting us to switch to emphasize robustness, so that even if we didn't get the exact right answer, we'd get close. Or even if it didn't work out quite right the first time, we would check our work. 
Computers don't check their work. That would be stupid, because the work has to come out the same way. There's a problem that an evolutionary system tends to drive out robustness. You could try to make a system that was really good, where everything was built like a tank, everything was done 12 times to be sure. Well, but then if there's some mutation, whatever it is, or even just changes in the marketplace, if you say, well, I'm only gonna do it 11 times, that's gonna make it a little bit cheaper. And if you're faced with one thing that works great, and another thing that works great that's cheaper, which one are you gonna buy? You're gonna buy the cheaper one. So there is this problem that even if you try to do it right, there are these pressures eating away at you, decaying away robustness, getting you right to the edge where the thing will last until you don't remember where you bought it, and then boom, that sort of thing. Now, that's only true if we're living in a stable environment. In times of war, in times of chaos, robustness comes into its own. Those little weenies made out of tinfoil and soda straws, they're gone in the first wave. The original Betamax recorder that weighs 80 pounds survives the nuclear blast. So the question is, how can we have a system be robust when, even if we try to do it right, there's gonna be this relentless pressure to chip away at the robustness if it's not being used? The answer I suggest to you is, even if things are stable, if we are living in the land of plenty, we may be able to afford some robustness along with our efficiency. Now, are we always living in the land of plenty? No. Are we usually living in the land of plenty? No. But in computers, we are living in the land of plenty. We're gonna go from a billion gates to a billion and a quarter in the next generation. Intel doesn't know what to do with the gates. 
Traditionally, it would give them to Microsoft and say, make your software 20% slower, but that game has kind of run out. Now we've got more transistors, and we don't know what to do with them. I know what to do with them. Let's buy some robustness. All right, that's my first story. Let's talk about the meaning of life. This'll be quick because, yo, I'm taking a lot of time. I just stole a couple of slides from one of the videos that I've got on YouTube. If this doesn't make sense, that video won't either. My field of research, I've got a lot, I kind of go from one to the other, but sort of the one that I've spent the most time in is called artificial life. People confuse that with artificial intelligence. It's not the same thing. Artificial life is about building systems, artificial systems, that somehow act like living systems, which from one point of view, excuse me, wait, like life is supposed to be natural, so artificial life is like, yes, no, something like that. We have to figure out a way to understand what life is that allows us to make it in more ways than one. The phrase artificial life goes back to Chris Langton, at Los Alamos at the time, in 1987, and the definition that was offered was the study of life as it could be, not just life as it is. And the idea is we could use computers to study life as it could be by building models. That leaves us wondering, okay, yeah, and what is this life that could be something else? So let's take a stab at it. Here's the dictionary, an old dictionary, a public domain dictionary. The state of being which begins with generation or birth and ends with death; anything that happens in the middle, yawn. What a horrible definition that is. That's utterly gutless. It's the same as Monty Python's meaning of life. Never actually defines it, just shows what happens inside it. Okay, but we wanted a characterization that would allow us to look at other things and say, is that life, yes or no? Here's a couple of definitions. 
A self-sustaining chemical system capable of undergoing Darwinian evolution, okay. A self-organized non-equilibrium system, such that its processes are governed by a program that is stored symbolically, and it can reproduce itself, including the program. These things have some things in common: self-sustaining, self-organizing, something about self and something about systems. Gerald Joyce is a chemist, Lee Smolin is a physicist. These are very complicated definitions. I wanna boil it down as simply as possible to try to get at the essence, what I think the essence of life is, which I wanna offer to you. My favorite definition I got from my dad. There's a surprising amount of truth to this. I don't know where it actually came from, but this is the definition I wanna offer you. Life is systems that dynamically preserve pattern. That's it, that's it. Wherever you see a system that's dynamically preserving its pattern, it's working, it's struggling, it's consuming energy to keep its pattern together, that is life. It's got a piece of the answer for sure. But we're gonna have to admit that there are some systems that are gonna be kind of like dynamically preserved pattern that we wouldn't normally consider life. My favorite example is to imagine a little eddy in a stream you're walking down. There's a little water flowing, and it makes a little whirlpool. It persists, the water circling around, circling, circling, it's a pattern. It's dynamically preserved. It's actually sinking a tiny little bit of the energy of that stream as the water goes around. But it's so fragile. It hits one little pebble, and the thing's gone. According to this definition, we're gonna have to say that has a little bit of life. It's a little bit life-like. And then things that can preserve their pattern in a much wider range of circumstances, like people or cockroaches, are gonna be unbelievably more life-like than the eddy in the stream. And that's the basis of computational paganism. 
The way that we need to understand living systems is not as, yes, you are alive, no, the fire in the forest is not alive. But in fact, there's a spectrum. And we're gonna chop off the spectrum at different places for different purposes. Is that unsatisfying? Yes, to some degree it is. Too bad. The distinction, if you need a yes or no distinction, that's on you. That's not a property of the actual world out there, okay? With computational paganism, we're in good shape. Now we can write computer programs that have bit patterns, that preserve themselves, that copy themselves, that do whatever they want. And those things, by the principle, are gonna have a degree of life to them. All right, part three. We're doing okay. So computer architecture is the basic design principles underlying a machine. Computer architecture, in an important sense, is just like real architecture. It has to do with the use of space and putting things near other things that need to be working together. It's just tailored for the needs of a computer. In order to understand what I wanna say here, we have to think about what's the right way to imagine what a computer is, okay? What is a computer? How should we think about a computer? Here's my computer in my hotel room this morning; that's how long it takes me to prepare a talk. This is probably what people will tend to think a computer is. But really, this is just its face, right? That's like saying, this is me. Which, to some degree, is true in terms of what expression I have, what emotions I probably have; it's a good place to look for that. But if you wanted to understand how I'm gonna behave, what decisions I'm gonna make, this is not the place to look. Where do we need to look? Well, we look inside. I did not tear open my machine. This is from Wikipedia. It's a picture of the motherboard with all the chips and the fans to get rid of the heat. We gotta look closer. Is it here? Is it in one of these chips? Well, we're getting closer, but it isn't there. 
And then those chips, those are weird things anyway. What if we went inside one of those? You look at one of those chips, almost all of it's just stupid plastic. The whole thing, it's just plastic to carry these tiny little wires away to get to the pins. The thing that's making the decision has got to be further inside. Here's a picture of a chip, a single chip. We can get a little closer. This is where we need to look to understand how computer architecture works. On this particular thing, I don't know all the details of it, except this stuff up here is memory. The vast majority of the gates, in fact, of this thing is memory, even though it's a fairly small and old computer. It's actually just a computer to help with the I.O. of a bigger computer, but that's a separate story. This stuff doesn't change at all. Everything that's up here is slaves, passive slaves. Your job is to remember zero. I'm zero. Your job is to remember one. You're also a one. That's it, that's all they're doing. And the control, the agency, the authority, the ability to make changes is down in here, in the central processing unit. And that was the essence of the von Neumann machine architecture. A centralized place that made all of the decisions. And then this vast ocean of completely equivalent worker bee drones that did nothing except remember what they were told and cough it up when they were asked about it later. That's called random access memory, RAM. You buy a computer, it's got eight gigabytes of RAM. That's eight gigabytes of these pitiful slaves that's saying, yes, zero, yeah, still zero. If we were to live in here, would you like to be one of those little bits? It'd be a pretty dreary existence. It's kind of like, you know, there's this big drum going boom, boom, boom, boom. It's telling everybody, what are you doing? What are you doing? I'm a zero, I'm a one. Then there's this tiny little region where everything happens, the CPU. 
You get picked, you get up, you get down to the CPU, you get one added to you, and you get sent back to memory. That's how it works. That's what makes it work, serial determinism. That's also what makes it a terrible, impossible to make secure device. Think about it. Every change that happens in this machine happens in one place, the CPU. It's not like we have a mudroom at one end of the house and the living room at the other end of the house and mom in the middle screaming at you if you still have your shoes on. There's only one place where everything happens. Let's look at the kitchen. The stuff that happens at the CPU are the instructions that correspond to our most trusted, deeply held personal internal stuff and also the scum of whatever the internet dragged in. All of that stuff gets processed in the exact same spot, the exact same microscopic little bit of silicon down here, so what happens if something goes wrong? What happens if there's a bit flip? Worse, what happens if our software had a bug in it and somebody knows about that bug? They can then switch the CPU from doing what it had been doing to doing essentially anything in one step. Is it possible to write secure software to make a secure system with this sort of thing? In principle, sure. In a mathematical sense, sure. Just like in a mathematical sense you could stand an entire deck of cards end to end vertically and it could stay up. It could happen. Are you gonna count on that for your financial information? That is really where we're at. We're taking more and more of ourselves, our valuables, our lives in some cases and we're putting them in control of machines like this that have all of their decision-making power, centralized. Computers might be all touchy feely and happy and show us our Facebook friends but if you lived inside of one it's a nasty, horrible fascist slave existence. 
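That centralized picture, passive memory plus one single spot where every change happens, can be caricatured in a few lines (a toy of my own, not any real instruction set):

```python
# A caricature of the von Neumann picture (my own toy, not a real ISA):
# a passive memory of cells, and one central loop, the CPU, that every
# single change must pass through, one serial deterministic step at a time.

def run(program, memory):
    """Execute (op, addr) pairs; all mutation funnels through this loop."""
    for op, addr in program:
        if op == "ZERO":        # a cell is told to remember zero
            memory[addr] = 0
        elif op == "INC":       # a cell is fetched, gets one added, goes back
            memory[addr] += 1
    return memory

# The bits never act on their own; they only remember what they're told.
print(run([("ZERO", 0), ("INC", 0), ("INC", 0)], [7, 7]))  # [2, 7]
```

Everything trusted and everything untrusted alike passes through that one `for` loop, which is why a single flaw there compromises the whole machine.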
You're one of the individual bits, and you're not part of the CPU, statistically speaking, and even if you are, you're just one teeny little bit of the CPU. The political science of the inside of the machines we have today is that of a centrally controlled, centrally planned, centrally run dictatorship. And just like, well, not just like Mussolini, in one way it's extremely efficient. There's no redundancy. But on the other hand, if anything goes wrong, all bets are off. There isn't anybody down there saying, wait a minute, I'm not just holding a zero, I'm holding the important part of how much money we've got. It doesn't make sense for me to be a one. There's none of that. There's no individual agency. All there is is, you passively do something, I told you what to do, don't ask why. That's where we're living. Every computer you've got is that. It's kinda sad. There is an alternative. We could think about what it would mean to give a little teeny bit of silicon, a little bit of computing power, initiative, autonomy, agency. Could we even conceive of such a thing? Sure, it's easy. We already gave it to the CPU. So let's just make the CPU really small and make a whole bunch of them. And now it's like everybody in this room. So, blah, blah, blah, blah, everybody's thinking their own thoughts, this guy's kinda cool, sort of stupid, whatever it is they're thinking. All at once, when we do that, there's a ton of waste. We're all thinking the same thing half the time. We could have gotten away with just having Jessica think it for all of us. That's the way it would have worked with the von Neumann machine. But instead, no, we all think it ourselves. Incredibly redundant. But that's what makes it robust. If something, God forbid, happens to Jessica, there's plenty of other people thinking the thought. It doesn't feel like redundancy when it's me, when it's you, when it's the other guys. Yeah, maybe it does. We can get robustness, we can get scalability. 
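That buy-robustness-with-redundancy idea can be sketched as simple majority voting (my own toy, not Ackley's architecture; the fault rate and the worker are made up for illustration):

```python
# A minimal sketch of buying robustness with redundancy (my own toy, not
# Ackley's design): ask several unreliable workers for the same answer and
# let a majority vote wash out the occasional fault.
import random
from collections import Counter

def faulty_add_one(x, rng, p_fault=0.2):
    """One unreliable worker: usually x + 1, occasionally garbage."""
    if rng.random() < p_fault:
        return x + rng.randrange(100)   # a fault corrupts the answer
    return x + 1

def redundant_add_one(x, rng, copies=9):
    """Do the work `copies` times and take the most common answer."""
    answers = [faulty_add_one(x, rng) for _ in range(copies)]
    return Counter(answers).most_common(1)[0][0]

rng = random.Random(1)
votes = [redundant_add_one(41, rng) for _ in range(100)]
print(votes.count(42), "of 100 majority votes came out correct")
```

Even with one worker in five misbehaving, the vote almost always comes out right, because independent faults rarely agree with each other; that is the redundancy that "doesn't feel like redundancy when it's me."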
We just have to figure out how to architect a computer to be something more like free-market democratic capitalism. Bottom-up autonomy with distributed agency. If we could do that, don't you think it would be un-American not to? I ask you. But we're not doing it yet. We've got a lot more that we need to do. So that's the real point I wanna leave you with, before we actually try to demo, try to see what this might actually be like. A computer is made of billions and billions of individual little gates connected by wires so that they can interact with each other. Each of those gates is actually capable of making a decision. An incredibly simple, incredibly tiny, but legitimate decision. Between, this stuff says I should be zero, this stuff says I should be one: I think one. Each bit of a computer is a non-linear element that can take some inputs and then bend them, make a decision, okay? That is an opportunity for each tiny little bit of things to have a tiny little bit of agency, a tiny little bit of autonomy, independence, to use that tiny little bit of decision-making power. And what von Neumann was telling us about error handling was that we were just being gutless weenies by having our machines blue-screen-of-death when anything goes wrong. Because we hadn't done the work to give the machines a plan B. We hadn't done the work to say, we could arrange computation so that we're doing everything dozens of times, at least unless we get really, really strapped. And so if something goes wrong, who cares? Almost for sure it's gonna get washed out by all the other guys who are doing it right. Now you know how that story works out. It's that evolutionary problem of robustness going away. I don't have to be robust because someone else will be robust for me. So nobody's robust. Now we're gonna address that. We can build computers that are more like free-market economies, that will do the work as many times as they can. 
They'll do the work as many times as there are coffee shops on every block. One of them closes, you'll never know. Did I walk a little further today? Yeah, I don't know. Okay, let's do a demo. Let's suppose we could actually build computers that were organized this way, that we were gonna be able to make as big as we want. How is that even possible? A piece of material is physical and finite. It can't be as big as we want. I can want pretty big. Well, how we're gonna do it is we're gonna break it down into a tile. We're gonna say, for a rule for a computer to be satisfying... I feel your pain. I'll try to be more interesting. To be acceptable, a design for a computer is not something that just comes to the end and says, that's it. A design for a computer is a design for a tile that we can plug together. We can fill space by tiling together these little individual pieces. That's the design of a computer. And then we can make the thing as big as we want. We just buy more tiles, plug them together, and the computer gets bigger. We're gonna have a whole nother level of needs for our computations to be robust. Because we might be plugging in stuff while they're running. Or, whoops, I disconnected the whole South 40 and it's gonna have to let that stuff restart. So, an indefinitely scalable computer, an indefinitely scalable computer architecture, is a rule for filling space with tiles, little computers. And we've been working on this. Here's our first one. This is the tile that we designed in 2008 and built in 2009. It was actually briefly for sale. Some people bought them. No, not just some people, some truly adventurous, really cool folks, including some folks at NASA and so forth, as well as just hobbyists. This thing here, that's the chip in the center, that's the CPU. It's basically a 2007 smartphone chip, which you wouldn't spit on today. 70 megahertz, 32 kilobytes of RAM. 
And we paid a lot of money for them, because we were buying them by the ones instead of by the millions. It's got connectors to talk to the neighbors. They exchange power and ground and bits. They talk to each other and so forth. So you can plug these guys together. Here's four of them plugged together with little connectors. See how it goes? You can hot plug them and they'll all just start running. Here's a schematic version of the same thing, okay? This is meant to be four tiles. This is meant to be like this, except sort of stood up. Here's one, here's one. They're talking to each other in all directions, okay? Let's take a look at them. See if this is gonna work. All right, hopefully you didn't see a thing. These are our four tiles. They're now being simulated on my laptop. They're actually four separate tiles. We can show them separately if we want, but it's typically more convenient to squeeze them together to hide the fact that they're actually physically separate, because they have communication channels, so that when something changes on one of them, they tell the next one, to try to maintain a consistent view of the whole thing. So let's squeeze them back together again. All right, this is a small one. Let's start with a bigger one first. All right, here's another one. This is five by three, is fifteen tiles, okay? Now the idea is there's a grid here. Each of these teeny little squares here is one little site where we can plop something down and do a little computation, okay? And we've got this table of elements that we can grab from, like a palette, and paint with them. So let's see, we'll get our brush here, we'll get this stuff, whatever this is, and we'll plop one of these guys down, okay? And then we let it run. Whoops, where'd he go? Oh, there he is, we just can't see him. Let's get rid of the background. There we go, okay. This thing that you see floating around here is an element called Dreg. It stands for dynamic regulator. 
And it was the first element I invented when I got this stuff going, around 2010. The way Dreg works is this. When it's his turn to go, he wakes up and looks north, south, east or west, one direction at random, and says, what is in there? And when it looks in one of its neighboring squares, it's either occupied or empty. Occupied, he throws a random number and maybe erases it, whatever's there. Doesn't check and see if it's something important. Doesn't check to see if it's one of a kind. Doesn't check to see if it's in the middle of an important computation. If its number comes up, erase it. On the other hand, if the spot that it encounters is already empty, it throws a random number with different odds and maybe creates a Res. Oh, and you can see we've got seven Res in the world now, those are the brown ones, and two Dreg, those are the little gray ones. So the basic rule is: if it's occupied, throw a random number and maybe erase it. If it's empty, throw a random number and maybe create a Res. The only exception is, if it's empty, there's also a very low probability of creating another Dreg. So the Dreg will reproduce itself, with very low odds. And what happens is the world gradually fills up with this mix of Res, which stands for resource atom. It's like mana. It's like the fundamental goo that you can make anything you want out of. And these Dregs, which on the one hand are the source of all mana, and on the other hand are the source of random death. And this Dreg is a canonical example of how we beat that evolutionary trade-off of losing robustness over time. We make the source of resources be the source of destruction. We don't wait for a cosmic ray to come hit the memory and flip a bit. We don't wait for an attacker to finish with JP Morgan and come after us. We attack ourselves continually to stay on our toes. If we got rid of all the attackers, we'd get rid of all the resources. It's the same guy. So this is gradually filling up. 
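As a sketch, the Dreg rule just described might look like this in Python. The specific probabilities are made-up stand-ins; the talk doesn't give the real odds.

```python
import random

EMPTY, RES, DREG = "empty", "Res", "Dreg"

# Illustrative odds only; the real element's constants aren't given in the talk.
P_ERASE = 1 / 100         # chance of erasing an occupied neighbor
P_CREATE_RES = 1 / 50     # chance of filling an empty neighbor with Res
P_CREATE_DREG = 1 / 1000  # much lower chance of making another Dreg

def dreg_step(grid, x, y):
    """One turn for the Dreg at (x, y): look one random direction and act."""
    dx, dy = random.choice([(0, 1), (0, -1), (1, 0), (-1, 0)])
    neighbor = (x + dx, y + dy)
    if neighbor not in grid:          # off the edge of this tile; do nothing
        return
    if grid[neighbor] != EMPTY:
        # Occupied: maybe erase it, no matter what it is or how important.
        if random.random() < P_ERASE:
            grid[neighbor] = EMPTY
    else:
        # Empty: rarely create another Dreg, otherwise maybe create Res.
        r = random.random()
        if r < P_CREATE_DREG:
            grid[neighbor] = DREG
        elif r < P_CREATE_DREG + P_CREATE_RES:
            grid[neighbor] = RES
```

Run it long enough and the neighborhood settles into exactly the churn described: Res appearing, occasionally getting erased, with the same element serving as both the mana source and the random death.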
I've got, this is like a cooking show, I've got a version that's already been going a little further. So this is what it looks like a little bit later. The world has filled up, about 35, 40% of it, with something: mostly Res, some Dreg. Now we can do other stuff with this. And I'd like to see if we can do it here for you. Let's see. So I'm gonna make myself an Em atom, E-M. That stands for emitter. Let's get the grid back here. All right, so I'm gonna plop an emitter down there. Okay. And those blue things that it's just started popping out are data. This is a thing which is now emitting data items, which for our purposes is random numbers, into the grid. Now at the other side, we're gonna get these guys. See, and these are consumers. They will pull data out of the grid. So I'll put a couple of those guys down there. And you notice the consumers and the emitters spread vertically all by themselves. That's a hallmark of how robust computations work. They rebuild themselves from a seed. You don't have to make the whole thing yourself, you build a seed. All right, and then finally, I'm gonna take one of these guys, the SR. This is a sorting element, and I'll plop them right in the middle. Let me see it here. The way these red guys work, the sorters, yeah, now we're going. The first thing that they do, when it's their turn to go, they look around them and say, is there any Res in the area? And if there's any resources, they convert the Res into sorters. So they reproduce themselves opportunistically, uh-oh, there we go, based on the availability of Res. And after they've done that, they look to their stage right and say, is there any data item there? And if so, is the number inside him bigger or smaller than the number I've got inside of me? If it's bigger, then I'm gonna try to move it from my left to my right and put it below me. If it's smaller, I'm gonna try to move it from my left to my right and put it above me. 
And then as the final step, I take whatever the number was on that data item I just moved and I make my threshold be that number. So what each of these sorter guys is doing is a little quantum step of sorting. They're not actually sorting a bunch of numbers, but they're making it a little better. Bigger numbers are a little lower, smaller numbers are a little higher, and they've moved from the right, where the emitters are, towards the left, where the consumers are, okay? We have an alternate way that we can draw these. Put the buttons up. Instead of coloring them blue for data and red for sorters, we can color them according to their value, their numerical value. We get a picture like this. The lighter colors, the whiter colors, the brighter colors are small; the darker colors, the blacker colors are big. And what do you see? On the right-hand end, near the emitters, it's a hash of all different colors, because the numbers are random. But as we move across the array, the numbers get more and more laminar. And by the time we get to probably two thirds of the way through, something like that, the numbers are actually fairly well sorted. And now the sorters are making these very fine distinctions, 830,000 versus 840,000, and so on. So much so that once the numbers get to the left-hand side, when they get to the consumers, the consumers pull them off. So the consumers, what they do, they just look around them, and if they see any data items, which I don't really see many here, they should be coming through every so often, they yank them out and they score them. Because they know the numbers are going from one to a million. So if the numbers are really random, there ought to be about one thirty-second of the numbers from one to a million here, and the next thirty-second from here, and so on, all the way to the bottom like that. And if you do that, the consumer guys can work together and tell us how well the sorting is going. And the answer is, it's going okay. 
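A sorter's turn, as just described, boils down to a few lines. The coordinate conventions here are illustrative assumptions: data is taken to enter from one fixed side and leave on the other, with bigger values routed downward and smaller values upward.

```python
def sorter_step(data, pos, threshold):
    """One turn for a sorter at grid position pos.

    data maps (x, y) -> the number carried by a data item at that site.
    The sorter looks at the square on its input side; if a datum is there,
    it moves it to the output side, below itself if the value beats the
    threshold, above if not, and adopts the moved value as its threshold.
    """
    x, y = pos
    src = (x + 1, y)                      # input side (assumed convention)
    if src not in data:
        return threshold                  # no datum this turn
    value = data[src]
    dy = 1 if value > threshold else -1   # bigger sinks, smaller rises
    dst = (x - 1, y + dy)                 # output side
    if dst in data:
        return threshold                  # destination occupied; try later
    data[dst] = data.pop(src)             # one little quantum step of sorting
    return value                          # moved value becomes the new threshold
```

No sorter ever sorts anything globally; each one just nudges one datum one step in the right direction, and the statistics of thousands of such nudges do the rest.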
Is it a perfect sorting algorithm? Absolutely not. These guys are wandering around. Whoops, uh-oh, let's get rid of the grid. It's a little easier to see the stuff flowing if you look at it from a distance. There's Dreg in there. Dreg is erasing some of the data. Bad luck. That input just never appears in the output. Better get used to it. But it does quite well. It does pretty well. And in fact, it's not possible to solve this sorting problem perfectly. Because, you know, what if you get unlucky, and just the random numbers all happen to be big for a second? They're gonna get put in the wrong place. It's not its fault. But what this thing is, is unbelievably robust. We can flip a whole bunch of bits at random throughout the thing. Can't even see it. We can blow a big old hole in it somewhere. No problem. You see the data starts piling up at the leading edge of the thing, because there aren't any sorters to pull it through. But that's okay. The Dreg diffuse in, build Res, the sorters diffuse in behind it. The machine rebuilds itself. The machine is in a perpetual state of constructing itself. So, when damage occurs, it heals. Regular old computer? Totally different way to think about computation. The key, well, there's a bunch of keys, but one of the keys is we made this assumption of geometry. That input was left, output was right. Small was up, big was down. By making a geometric assumption on space, that allowed each of the individual sorters to have agency. I know how to make things better. This guy should go from here, he'd be better off there, he'd be better off up, he'd be better off down. And then we just need to have enough of them, and the job gets done. So this, it's not an algorithm. This process, I call it the demon horde sort. It's like Maxwell's demon. Each little guy is making a decision. You get enough of them, you do computational work. And we've explored this in a lot of different contexts. 
You can use this not just for sorting; you could use this for routing packets, for example. If you have knowledge about, these guys wanna go this way, those guys wanna go that way, throw them into the grid. If you're worried about robustness, throw three or four copies in the grid, it's okay. A very different way to compute. All right, so this has recovered fairly well. You can still see it's got a bit of a traffic jam up there. All right, I wanna show you one more thing. Let's go to the smaller world here for it, 'cause it's a little bit easier. Once you have this indefinitely scalable computing fabric, where you can just plug it together, you can pick elements as you wish. Get the grid going. And furthermore, if you're a programmer, you can create new elements that have new properties, that do new things that you wanna do. And I wanna show one other one. No one's seen this before. This has just been invented since our seminar started. It's called XG, which stands for generalized crystal. And the idea is this. This diamond shape here is how much, when it's time for a guy to go, he can see of the world around him. This is his neighborhood. It's four steps, Manhattan distance, city blocks, in any direction, okay? Which is tiny from the point of view of eight gigabytes, but it's huge from the point of view of a cellular automaton, which is sort of a thing that this kind of competes with, if we know about that. So what we can do with generalized crystal is we can say, well, if you're a generalized crystal, you wanna see yourself. Okay, that's good. But let's say you also wanna see a guy there, and a guy there, and a guy there, and a guy there, okay? And if you see that, you're happy. And if you don't see that, let's make a guy. Whoops, there he is, we made a guy. And we'll let time start to run. He's not that happy, because he would like to see four guys around him and he doesn't see them. 
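That diamond-shaped neighborhood, every site within four city blocks of the active site, is easy to enumerate; the (dx, dy) offset convention is just this sketch's choice.

```python
# Every (dx, dy) offset within Manhattan distance 4 of the active site.
RADIUS = 4
event_window = [(dx, dy)
                for dx in range(-RADIUS, RADIUS + 1)
                for dy in range(-RADIUS, RADIUS + 1)
                if abs(dx) + abs(dy) <= RADIUS]
# A radius-r diamond holds 2*r*r + 2*r + 1 sites: 41 at radius 4,
# counting the active site itself.
```

Forty-one sites is indeed tiny next to gigabytes of flat memory, but compared to the three-by-three neighborhood of a classic cellular automaton, it's a lot of room for a rule to work with.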
If there's resources there, he's empowered to crystallize them into more of him. So let's throw in a little Res just for fun. All right, so there's a little Res. And it builds more crystal. Now, if we go in here and we build a guy who's at a bad position, that's gonna piss him off. And in fact, what happened in that case was he recognized he was inconsistent with his surroundings and he decayed back to resource, which then got snapped up by another guy, okay? So if we make a bunch of these guys, they'll sort themselves out, and they end up decaying to resources when they're inconsistent, okay? And these guys, the Res, will eventually drift out to the edges and build the crystal, okay? That's very nice, it's fine. And the thing is, it's all incredibly general. You know, we could make a crystal that's got a bit of skew to it, for example, like this. So we'll make some of these. And now we get, he was having a little trouble there, where he was kind of fighting against two different shifts for the crystal, but he sorted it out, okay? Very nice. These have a lot of the properties of crystals, in a very crude sort of categorical sense. But this is a new digital medium. We can do anything we want here. Suppose we did something like this. I wanna have a guy, and I want there to be a guy right above me, and that's it. Now, that's a problem. If we apply the rule to the one guy, we're gonna have to apply the rule to the guy above him as well, and then he's gonna be unhappy, because he wants a guy above him and not a guy below him. A crystal like this can't happen in nature in a single material, in a single element. What happens? Well, we make a guy, he's fine. Make another guy, what happened? 
The guy who was behind decayed and got soaked up by the guys in front, and now we get these guys stacking up. And what they're doing is they're stacking up until they get as close as possible without being able to see each other. Because the instant one guy comes in, intrudes into the event window, that's what this neighborhood is called, the event window, that's inconsistent, that's terrible, somebody decays. But as long as they stay outside the event window, out of sight, everything's good. So we get these guys who head north. You can do this kind of thing in actual physical reality with doped crystals, by making multiple layers of different stuff that have different properties. But here we can explore it directly. And I don't know if it'll happen here right now. Let's make a few more. One of the things that I didn't expect about this, we've had lots of surprises in the class already, is these guys act like a Res pump. They gather all the resources up to the top of the screen. When you can see it, it's kind of sparse down at the bottom and very concentrated up top, because when they're down at the bottom, they're far apart, and they can soak up Res to try to make crystal, and then the crystal rises up, and when they get inconsistent with their neighbors, they decay back to Res. It's an active, dynamic Res pump. Who knew? Try one more. Suppose we have a guy like this. Now this is symmetric, but it's inconsistent, because once again, when this guy gets a turn, he wants someone above him and below him, and he's not gonna have it. We get one guy, no problem. We get two guys, no problem. We get three guys, now they say, well, no, I don't wanna be here, then he decays, someone else picks him up. And what we're getting is a random walk, Brownian motion in one dimension. And it's fine. And one of the students in the class said, well, what happens if you make it in both directions? Do you get Brownian motion in two dimensions? 
And I said, well, we don't know. Let's try it. So we'll try something like that. And we make one guy, two guys, three guys. Kind of like Brownian motion in two dimensions. But this construction has an interesting additional property. Let's put some Res in here. When it runs into the Res, it's gonna try to build itself out larger, but there's no consistent way to do it. And it reproduces. Every so often it'll leave a little pair there that'll get soaked up by others. Simplest, stupidest, one particular material. This is the thin edge of the wedge of a new world of computing possibilities. How are we gonna make this play iTunes songs? I have no idea. And you know what? I don't care. I wanna understand how we make the basics of robust computing happen in models like this. Okay, I've run over my time, so let me finish up. This is called the Vickers cross, this particular thing. All right, let me finish with this thought. I totally appreciate this scary, cool poster that the WID guys made. But it bothered me a little bit, the subtext that security is about bad people. You can tell, he's got a hoodie. Ah, ah, ah, ah, right. And I hope you can understand, having sat with me for an hour, that this is not where I think the blame for computer security lies. Yes, this guy is taking advantage, but our computers aren't even trying. One mistake, the CPU gets control, goes to somebody else, and they can do anything they want. So I tried to make up a response to this. It didn't come out too well, but maybe you can get the idea. It's sort of a cartoon. The fundamental security problem is not the people. It's not the programmers. It's the computer that never saw a con he didn't buy. We have to figure out how to make computers be savvy. And we can do it. We do it by beginning with distributed agency, pushing it out to the leaves so that there's no single place to take the machine over. Thank you so much for listening. Anybody dare? 
I'm just curious, on your system there, what if somebody were to introduce a send-me-your-information crystal that would go around, that kind of thing? Sure, the vulnerability, the risk, is always there. We can't eliminate the risk, because we need to have it actually do work. Work for us can turn into work for somebody else. The way we mitigate the risk is we make a huge distinction about the periodic table of the elements. That's the actual code, and that we control very carefully. It's stored in non-volatile memory. You have to use extra, separate channels of information to convey it. So if there's no element that will steal your information, then there's no way it can happen. If, on the other hand, you can take a bunch of existing little elements and put them together, well, then you'd potentially have a problem. But what we then have is the fact that the whole thing is distributed. So unlike what we have today, where all you gotta do is take over the CPU and you've got the keys to the kingdom, whatever it is that is gonna do bad to us is gonna have to fight a land war, tile by tile by tile. First to get in and reach the data, and then figure out how to exfiltrate it. Given that we have this world that is inherently noisy, the attacker cannot know exactly what it's gonna find inside, as opposed to the machines we have today. We've got one here. You mentioned how you don't see how this would play something like iTunes or files or... Perhaps I was exaggerating. But go ahead. What kinds of applications do you see as a real-world purpose? In the near term, what this is best suited for is signal processing tasks, where you have a continuous flow of data coming in and you have to make decisions about it. You have to reduce the data. You have to integrate it. And if you screw up a little bit, it's not the end of the world, because the sensors are gonna be reporting data again immediately. 
And I hate to think about war-like applications, but it's a place where robustness is valuable like that. So I could imagine that sort of thing. Signal processing tasks, targeting tasks, that kind of stuff. The fantasy is that with these tiles, down the years, if you get a bullet through your computer, it'll start working a little less great. The targeting will be off by a tenth of a degree, something like that. But if it bugs you too much, you can go over to the radio and scoop a bunch of stuff out of it and pack it into the targeting system, and it'll fix itself up, like that. We'll see. It's the earliest days. Yeah, so it's a related question, actually, and it has to do with... So you're building up, as you say, you can somehow put the periodic table behind a firewall. Try to, yeah. But in the end, once you've explored the natural history of these devices, and you have an ontology that allows you to construct a proto-functional alphabet, then you're gonna have to interface with the human mind using a programming language that's familiar to them, and there's your vulnerability again. That question wasn't going... It didn't arrive where I thought it was going to arrive. Well, then, interpret it anywhere you like. All right, well, let me take it one way and then you tell me the other part. What we hope to do, in principle, the fact that we can create new elements here, is what we need to do research, because we don't know what the really powerful, effective, minimal set of elements would be. If, in the future, which is where I thought you were going, we figured it out, you need 256 elements, there's 60 to do this, and 30 to do that, and blah, blah, blah, blah, then we could start building tiles that are no longer reprogrammable. All they can do is those 256 functions. And then the obvious hole, the functionality hole, is closed. The ability to do magic, to change the laws of physics, disappears from the universe at that point. 
In exchange, presumably those atoms are pretty powerful and you can compose them and do all sorts of things. So how does it work with the people? Yeah, in the sense that you're going to need to learn the grammar, to operate. You, a person? You, a person. So I'm thinking that presumably, by virtue of the constraints of our own mind, the sort of serial nature of our own thought, that will impose a vulnerability in terms of the language that we use to interface with your hardware. All right. I think all of the stuff that I've shown you here is all two-dimensional, right? And that bugs a lot of people, mathematicians, geometers, stuff like that. They say the world is three-dimensional and so forth. And I say, yeah, you're right. But in fact, for manufacturing systems, it's really nice to have the third dimension available to construct the thing, to maintain the thing, and so on. That said, we could take a model like this and start stacking them up. And I have this fantasy that we build a tablet computer, in effect, out of a few hundred layers of this stuff, that has sensors on the bottom, processing layer after layer after layer, and little LEDs on top. And it looks like an unbelievable iPad. This kind of organic whole thing, pointed at something: it's seeing it, processing it, displaying it to you. The top three layers, all they're doing is working to get themselves into configuration to represent the letter E, so that when they light up, you'll see pictures. So the interface problem doesn't have to be any different than what we already have, to people like that. And the hope is the machines are going to have to come to us. Certainly, the expectation is there now. One of the contrasts I thought you were drawing between old-fashioned computing and your style of computing was that, in the old ways, things were deterministic. But the systems you've described are every bit as deterministic as anything. They actually aren't. 
And even this stupid little simulator is actually non-deterministic. This is a nerdy answer, but because each of these tiles is being run by a different thread of execution, and thread scheduling, even in computers today, is non-deterministic, there's no guarantee which one is actually going to go next, which makes debugging this stuff really fun. So no, I like that. And again, these things have pseudo-random number generators built into them, which we believe are not really predictable. So it's almost as good as being non-deterministic. Really, the key part of serial determinism is that, from the moment the program starts, the programmer just builds more and more and more assumptions. That value is zero. This one's one. This is one bigger than it was before, and so on. And the layers of assumptions become teeteringly gigantic, so much so that if any bit flips, who knows? Because we build this system with noise at a very fundamental layer, you can't do that. You have to keep your chains of reasoning short. And that's what von Neumann was telling us we needed to do. So hopefully it'll lead us, in the end, to robustness. Dave, one final question here. So a lot of people think it's actually not just the deterministic nature and the stacking nature you were talking about, but the reliance on state in the computation. Go ahead. Again, not just determinism, but? The reliance on state. So, setting memory values, instead of relying on the functional approach to programming, where you have processes that complete with any given inputs. So I was wondering how this relates to that approach to computer security? I have a perennial fight with one of my colleagues who's a huge functional programming fan and believes it's the answer to everything. And so I have to kind of needle him about how, well, that's a nice little monad you got there, buddy, but you're sort of hiding a pile of state behind it, aren't you? I'm sorry if we're nerding out. 
The shorter answer is, state isn't going away. The questions are, number one, how much are you relying on it? And number two, what happens if it's corrupted? So it just goes back to the plan B problem. I mean, here's my slogan: there's nothing wrong with a von Neumann machine that can't be fixed by making it be a small and insignificant part of a bigger system. If we have tons and tons of von Neumann machines, then we're gonna have to be able to shoot some of them and keep on going, like that. So state is gonna be there. The question is, how much do you depend on it? All right, well, thank you all for coming. Please join me in thanking Dave. Thank you.