The first speaker is Dave Ackley from the University of New Mexico: "Indefinitely Scalable Computing Equals Artificial Life Engineering."

Thank you. Thanks for coming. I feel like I've been away from artificial life for a few years, and I'm happy to be back now. I remember at ALife II, Gene Spafford gave a paper on computer viruses as a form of artificial life. At the time I thought that was silly; I was putting evolution and learning systems together. After 25 years, I now feel that, in fact, he was just about right. The real issue is not whether such systems out in the real world are actual living systems, actual artificial living systems. They absolutely are. The issue is why the real artificial living systems in the world are being used for evil and not for good, and we need to understand what the bad guys already understand. So I guess my one-word takeaway is that we need more synthesis out of a conference on synthesis and simulation. We need to be building artificial living systems, not just in robots. Robots are great; we've got great robots happening down the hall. Not just in soapy water, but in bits. We need to be building, synthesizing, engineering living systems in digital form as well.

All right. Here is the talk in one slide. We are reaching the point where, in Naomi Leonard's terms, it's time for the honeybees to find a new home. Traditional computing based on deterministic serial processing has been revolutionary for 60 years, and it's just about tapped out. Clock speed increases have stalled. We're now going from a central processing unit to multi-core, which is essentially going from a dictatorship to a troika, or perhaps an oligarchy, but it's not moving to the sort of bottom-up, democratic kind of governance that we could use to build computing in the large. There's another approach to computation. Instead of determining every step perfectly, instead of saying it must produce exactly this or else the computation didn't even happen, the alternative is statistical parallelism, where we have tons and tons of things happening at once. We do not try to control the action of every individual little step. We control them in the aggregate. We control them based on the law of large numbers. This is the same thing that is already being done with electrons at the level of digital hardware; that's why digital hardware is so reliable, because it's exploiting the law of large numbers to get voltages to act the way we want. But above that, somehow we think we have 100% reliability and can do anything we want. It's not true. We have to carry statistical parallelism up the stack. Once we do that, we can have computational systems that, rather than displaying the incredible fragility of the computer systems we have today, where one bug and your system is owned, one fault and your system is crashed, will offer not guaranteed security, because there are no guarantees of security, but a basic level of sanity. Not a level of curing cancer, but a level of washing your hands between patients. That's what we need from computation today.
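To put a number on the law-of-large-numbers point, here is a small, purely illustrative sketch (not from the talk): if each redundant component gets the right answer with probability better than one half, the chance that the aggregate answer, read out by majority vote, is right climbs toward certainty as the number of components grows.

```python
import math

def majority_correct(n, p):
    """Probability that a strict majority of n independent components,
    each correct with probability p, gives the right answer."""
    need = n // 2 + 1
    return sum(math.comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(need, n + 1))

# Each individual component is only 90% reliable, yet the aggregate,
# taken by majority vote, becomes nearly certain as the count grows.
for n in (1, 11, 101):
    print(f"{n:4d} components -> P(majority correct) = {majority_correct(n, 0.9):.12f}")
```

That is the sense in which the hardware already buys its reliability in the aggregate rather than per electron.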
When we do build systems at that scale, when we start controlling the computations we're performing statistically rather than in detail, it's going to turn out that the architectural design principles we need are artificial life. We are going to have computations that automatically reproduce themselves, for redundancy and for performance improvement. They will reproduce competitively with other tasks using the same hardware. We, the people in this room, the people studying artificial life, can and should own that. The future of computer architecture is artificial life, and it's time to get started. The honeybees are going to move; we need to send out the scouts now. That's my takeaway message.

Now, this is a methodology session, so I'm going to talk methodology. I've already blown five minutes, and I want to save at least eight or ten minutes for a demo, because that's really what I want to do. But let's talk methodology for a minute. What I want to propose to you is that this idea of indefinite scalability is, on the one hand, a very picky "oh, you didn't really mean that; yes, I really meant that" sort of thing, but on the other hand it's a useful design razor, a useful principle for understanding the limitations in architectural designs. I want to talk about that, and then, as an example, I'll talk about our little contribution: an indefinitely scalable computer architecture based on artificial life principles, called the Movable Feast Machine, which I'm going to demo.

This slide is hopefully mostly preaching to the converted here. It shows the two big styles of computation, with the artificial life side in the right-hand column and traditional architecture and computer science in the left-hand column. People say the first computation you run is the Hello World program. But the Hello World program is misnamed: the very first thing it does is say hello and die. A real Hello World program would live forever, saying "hi, how are ya?" That's what processes do and algorithms don't. So the living-systems approach, the emergent, bottom-up, evolutionary approach, is the right-hand column, and that's where we want to be. The reason I show this is that when we look in detail at many of our artificial life software models, they ought to be indefinitely scalable, but they're not, and they're not for kind of stupid reasons that we should work on fixing.

So here's the game. It's a thought experiment; just come along with me. Suppose you're given an indefinite supply of space and power and cooling and money for hardware. It's like DARPA, maybe even NIH; I mean big money. As much as you need. You've got space, power, cooling, and money. Your job is to invent a computer architecture, a tile of hardware that you can stack together to make a computer as big as you want, without ever having to stop and say, "whoops, 640K." That's the goal: be able to stack as much hardware as you want and never hit an internal engineering limit. The only limits you run into are running out of money, running out of space, running out of power and cooling. That's the game. And the only hard requirement is that each of these hardware tiles has to be fixed size, constant mass. Your tile is not allowed to grow with the size of the system. That's it. That's the thought experiment: imagine making a computer like that.
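As a rough sketch of what playing that game might look like in code (toy Python; the class and method names are made up for illustration and are not any actual hardware interface): each tile has a fixed amount of state, talks only to its physical neighbors, and nothing anywhere grows with the size of the machine.

```python
import random

class Tile:
    """A toy 'indefinitely scalable' hardware tile: a fixed amount of local
    state, wired only to physically adjacent tiles, with no global clock,
    no global addresses, and nothing that grows with the machine."""
    SITES = 16  # constant per-tile storage, regardless of how many tiles exist

    def __init__(self):
        self.sites = [None] * Tile.SITES   # bounded local memory
        self.neighbors = {}                # 'N'/'S'/'E'/'W' -> adjacent Tile

    def connect(self, direction, other):
        self.neighbors[direction] = other  # wiring is strictly local

    def event(self):
        """One asynchronous event: touch one local site and at most one
        neighbor. No step may consult any machine-wide state."""
        i = random.randrange(Tile.SITES)
        if self.neighbors:
            other = random.choice(list(self.neighbors.values()))
            j = random.randrange(Tile.SITES)
            if other.sites[j] is None:     # e.g., diffuse content outward
                other.sites[j], self.sites[i] = self.sites[i], None

def build_grid(width, height):
    """Stack as many identical tiles as space, power, and money allow."""
    grid = [[Tile() for _ in range(width)] for _ in range(height)]
    for y in range(height):
        for x in range(width):
            if x + 1 < width:
                grid[y][x].connect('E', grid[y][x + 1])
                grid[y][x + 1].connect('W', grid[y][x])
            if y + 1 < height:
                grid[y][x].connect('S', grid[y + 1][x])
                grid[y + 1][x].connect('N', grid[y][x])
    return grid

grid = build_grid(4, 4)                    # 4x4 today, 4000x4000 tomorrow
grid[0][0].sites[0] = "payload"
for _ in range(10_000):                    # unsynchronized events, in any order
    random.choice(random.choice(grid)).event()
print(sum(s is not None for row in grid for t in row for s in t.sites))  # still 1
```

The point of the exercise is only that nothing in the tile, not its storage, not its wiring, not its update rule, references the total size of the machine.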
When I ask people this, they think of a number of things and say, oh, this is obvious. So, just to drive home that it's not completely obvious: supercomputers? Not even close. A supercomputer, in general, has already been scaled about as far as it can go; that's what makes it super. If you wanted to make one 20% bigger, you'd have to start the engineering over. The internet? Is the internet indefinitely scalable, indefinitely large? Why not? What's wrong with the internet? Anybody know? Bandwidth? Well, at least if we build more hardware, we get more bandwidth. What else? Addressing. We have IPv4 addresses, about four billion of them; who could ever need more? Of course, we're always running out of them. Everybody says, oh, well, IPv6 has so many addresses we could never run out, and that's exactly the point: it doesn't matter. For the game of indefinite scalability, any finite limit on address space is unacceptable. Turing machines? Cellular automata? That's the one that bugs me. Artificial life people are using cellular automata all over the place, but most cellular automata models are not indefinitely scalable, and the number one reason is that they presume a synchronous clock. They presume you can step the entire system all at once. That's not indefinitely scalable.

So we start to think about this and we say, wow, Ack is really strict about this, strict indefinite scalability. Yeah, that's the idea. Well, why? The universe might be finite; why do we need something that could go beyond that? Here's the reason. I made a slogan: every scaling limit implies a limit at every scale. If you look at a model and find its scalability limits, each of those limits you can turn around and find as a corresponding design problem in the finite model, in the model as you're looking at it right now. Every scaling limit implies a limit at every scale. Here are a few examples. We assume perfect reliability. When we assume perfect reliability, there's no reason to add fault tolerance, no reason to add robustness. Robustness requires redundancy, redundancy is inefficient, and with perfect reliability, why do it? Presuming perfect reliability means we get catastrophic responses when failures do occur. Synchronous clocking means we can't actually have any input or output: the entire system is one island of synchrony, and because we're assuming synchronous clocking, we cannot talk to anything outside that system, because it's not synchronized to us. That's actually a problem we have with computers today. What do we settle for, a one in 10^9 chance of error? Globally unique node names and free communication: that's another assumption that shows up a lot in artificial life models. Every scaling limit implies a limit at every scale. So it's a useful way to understand the design limitations of any model.

The takeaway message for artificial life is that we can simulate whatever we want. Strictly speaking, if you want to do Conway's Life, that's great; we can learn lots of things. The problem is when you offer up a model and say, this is just a simulation, this is just for science or pedagogy. Someone else can turn around and say, well, I want to do synthesis, I want to build a machine, I think it would be cool. They're going to look at your model and try to use it, and then these limits are going to come back and bite us. We need to look for ALife models that are indefinitely scalable, to get the benefit of all the things, bottom-up, emergence, robustness, that we've been saying artificial life provides.
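To make the synchronous-clock complaint concrete, here is a minimal sketch, with a throwaway majority-vote rule chosen only for illustration, of the difference between stepping a whole cellular automaton at once and updating it one randomly chosen event at a time, which is the only kind of update an indefinitely scalable machine can actually offer.

```python
import random

N = 64  # a ring of cells; the rule itself is just a placeholder

def majority_rule(left, me, right):
    return 1 if left + me + right >= 2 else 0

def synchronous_step(cells):
    """Conventional cellular automaton: every cell updates at once,
    which quietly presumes a machine-wide clock and barrier."""
    return [majority_rule(cells[i - 1], cells[i], cells[(i + 1) % N])
            for i in range(N)]

def asynchronous_events(cells, events):
    """Indefinitely scalable style: one randomly chosen site updates per
    event, with no global barrier; the result must be robust to the order."""
    cells = cells[:]
    for _ in range(events):
        i = random.randrange(N)
        cells[i] = majority_rule(cells[i - 1], cells[i], cells[(i + 1) % N])
    return cells

start = [random.randint(0, 1) for _ in range(N)]
print("synchronous :", sum(synchronous_step(start)), "ones after one sweep")
print("asynchronous:", sum(asynchronous_events(start, 10 * N)), "ones after ~10 sweeps")
```

The asynchronous version gives up well-defined global generations, and any computation built on it has to be robust to that; in exchange, it never needs a machine-wide barrier.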
The model that we've got is called the Movable Feast Machine. I'm not going to go through this slide in detail; it's a little bit old. There's our little hardware tile, and there are eight of them or whatever. That's the version one hardware, which has some problems; this is what you learn by actually building hardware. We're hoping to build second-generation hardware soon; we need a little more money. One way to understand the model is as an object-oriented artificial chemistry. It's an asynchronous cellular automaton with huge, huge neighborhoods from the point of view of cellular automata, but teeny tiny memory from the point of view of conventional programmers. We're trying to piss off everybody equally: from the point of view of cellular automata, it's a huge neighborhood; from the point of view of a programmer, it's incredibly tight.

All right. That's an example, but why don't we try a demo instead? Let's see what happens. Well, we might or might not be able to see anything here. Here's a field of simulated tiles. Where's my... I lost my control key; I don't know where it is. All right. Let's take an element. I'll put an element called DReg, the dynamic regulator, in here. We'll put a field of them in there and let it run. What's happening is they're diffusing around. Man, they're diffusing incredibly slowly. What DReg does is look at one neighbor: if it sees an empty spot, maybe it creates a resource, a Res, there; if it sees an occupied spot, maybe it erases it. The net effect over time is that the DReg dynamically regulates the empty space in the system that it sees. Here's a version that's been running for a while: it's now about one-third occupied sites, two-thirds empty space.

When we have a system like this, a parallel system, and Naomi talked about this as well, how do you break deadlocks? In traditional system design, deadlocks are a huge problem for parallel systems. With a robust-first approach, we break deadlocks by breaking deadlocks. We randomly erase stuff. We don't ask, is this part of an important computation, are you immune? We erase stuff with no regard for what it's doing, and the computation has to be robust to that. If we're requiring robustness at all levels, that gives us the freedom to have a little slop at all levels as well.

So we have a world like this that ends up mostly empty, with something like 6% DReg. Let's move on. This keyboard is not working; without the keyboard, I've just got the mouse. Well, I can work with that. Let's pick another element, a sorter element, and drop one of them in here. Did I get rid of the guy? Oh, but now I can't do it; we'll just move on to the next one. We've got another one here. What we've done here is take the DReg and Res, the churning mass of stuff, and incorporate elements that are designed to be sorters. The red atoms are the sorters; the blue atoms are data. If we get close enough, there we are, you can actually see it. Each of these little SR guys is a sorter, and each of the blue guys is data. A sorter looks to its right, takes a datum, asks whether it is bigger or smaller than the sorter's threshold, and throws it out the other side, above or below itself accordingly. Get enough of those guys together, put data in on one side, and although it starts out random, by the time it gets to the other side it's pretty well sorted. That's what this does. You can also see some DRegs. You can no longer see any Res, the resources, because the sorters have consumed all of them to make more sorters. This is not an exact sort; it often makes small errors, it doesn't get things quite right. But it's incredibly robust.
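Here is a rough, deliberately simplified sketch of the two element behaviors just described, a dynamic regulator and a sorter, on a toy grid. The probabilities, layout, and seeding are made up for illustration, and details such as sorters consuming Res to reproduce are left out; this is not the Movable Feast Machine's actual code.

```python
import random

EMPTY, DREG, RES, SORTER = ".", "D", "R", "S"   # data atoms are stored as floats
W = H = 24
grid = {(x, y): EMPTY for x in range(W) for y in range(H)}

def neighbors(x, y):
    return [((x + dx) % W, (y + dy) % H)
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))]

def dreg_event(x, y):
    """Dynamic regulator: look at one random neighbor; if it is empty, maybe
    drop a Res (or, rarely, another DReg) there; if it is occupied, maybe
    erase it, with no regard for what it was doing."""
    site = random.choice(neighbors(x, y))
    if grid[site] == EMPTY:
        if random.random() < 0.01:
            grid[site] = DREG if random.random() < 0.1 else RES
    elif random.random() < 0.02:
        grid[site] = EMPTY

def sorter_event(x, y, threshold=0.5):
    """Sorter: take a datum from the right, compare it to a threshold, and
    throw it out the left side, upward if larger, downward if smaller."""
    src = ((x + 1) % W, y)
    if isinstance(grid[src], float):
        value = grid[src]
        dst = ((x - 1) % W, (y - 1) % H if value > threshold else (y + 1) % H)
        if grid[dst] == EMPTY:
            grid[dst], grid[src] = value, EMPTY

# Seed the world: a few DRegs, a few sorters, and a column of data to move.
for _ in range(10):
    grid[(random.randrange(W), random.randrange(H))] = DREG
for _ in range(20):
    grid[(random.randrange(W), random.randrange(H))] = SORTER
for y in range(H):
    grid[(0, y)] = random.random()

for _ in range(300_000):                      # asynchronous events, random order
    x, y = random.randrange(W), random.randrange(H)
    if grid[(x, y)] == DREG:
        dreg_event(x, y)
    elif grid[(x, y)] == SORTER:
        sorter_event(x, y)

print(sum(v == RES for v in grid.values()), "Res;",
      sum(isinstance(v, float) for v in grid.values()), "data atoms survive")
```

A single fixed threshold only splits the data into two bands; in the actual demo each sorter carries its own threshold, which is how a whole field of them ends up producing the approximate ordering seen on screen.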
We can blow holes in the thing, there we go, and, well, what happens? The system rebuilds itself. There we go. Okay.

All right. Now I want to show one more thing. If you think about a cellular automaton from a physical point of view, it's a driven medium. You can make a rule that starts out with one lit cell and ends up with a world full of lit cells, and that doesn't require any more food, any more power, than any other rule. It's a driven medium, which makes it unbiological from one point of view, but it also makes it very handy. And it means we have a design problem: what happens if we get a fork bomb, an element that just reproduces as rapidly as it can? What happened to our sorter? Our sorter just got wiped out. With a fork bomb loose in the universe, there's basically nothing you can fight it with, except possibly another fork bomb. We've got red fork bombs and blue fork bombs. It's a lot like the voter model, with the neighborhoods and so forth, and the dynamics are complicated, with long-term transients like that.

My point is this. As we design these systems, as we try to build intentional computations with them, we have to engineer with this stuff. We have to recognize that this stuff is there. And here's the thing: each of those guys, when it has an event, reproduces itself 12 times. It grows incredibly rapidly. But we can have something else that outraces it, that runs a little bit faster, and we've got an element like that. It's called AF, our anti-fork bomb. What it does is just what you would think: when it sees a fork bomb in the world, it responds by being a fork bomb, but as soon as the world around it is clear of them, it dies back. You can't see it right now, but right now the universe is about 6% anti-fork bombs. So now if we try to come in and throw one of these guys, the system responds to it and then cleans up again. We can now build the same sorting network that we did before. We drop in six, actually we only have to drop in one, anti-fork bomb, and it populates itself, and we can defend the system that way. This is what I need you guys to help me with: we need to understand the dynamics of making intentional computations in indefinitely scalable systems. Thanks so much for listening.
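A minimal sketch of that fork-bomb versus anti-fork-bomb dynamic on a toy grid, with made-up names and probabilities; in this simplified version the fork bomb only fills empty space, and the real elements differ in detail, but the shape of the defense is the same: reproduce when you can see the attacker, die back when the neighborhood looks clear.

```python
import random

EMPTY, FORK, ANTI = ".", "F", "A"
W = H = 32
grid = {(x, y): EMPTY for x in range(W) for y in range(H)}

def neighbors(x, y):
    return [((x + dx) % W, (y + dy) % H)
            for dx in (-1, 0, 1) for dy in (-1, 0, 1) if (dx, dy) != (0, 0)]

def event(x, y):
    here = grid[(x, y)]
    if here == FORK:
        for site in neighbors(x, y):          # fork bomb: copy yourself into
            if grid[site] == EMPTY:           # every empty site you can reach
                grid[site] = FORK
    elif here == ANTI:
        if any(grid[site] == FORK for site in neighbors(x, y)):
            for site in neighbors(x, y):      # anti-fork bomb: on seeing a
                if grid[site] in (FORK, EMPTY):   # fork bomb, respond by being one
                    grid[site] = ANTI
        elif random.random() < 0.05:
            grid[(x, y)] = EMPTY              # neighborhood is clear: die back

# A thin background of anti-fork bombs, roughly 6% of sites, then one fork bomb.
for _ in range(int(0.06 * W * H)):
    grid[(random.randrange(W), random.randrange(H))] = ANTI
grid[(W // 2, H // 2)] = FORK

for _ in range(300_000):                      # asynchronous events
    event(random.randrange(W), random.randrange(H))

counts = {name: sum(v == name for v in grid.values()) for name in (FORK, ANTI, EMPTY)}
print(counts)   # typically the fork bombs get wiped out and the anti then dies back
```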