It's Friday morning of the 2020 virtual ALife conference. I was supposed to make this video weeks ago, if I made it at all, but I wasn't sure I was going to make it. So if you're actually seeing it, it's of course because Juniper is unbelievably awesome.

So the motivation is hardware determinism. The basis of traditional computing is unscalable and, worse, it's unsecurable. Traditional computing is a glass sandwich. It's redundant and robust in the small and in the large, in the electronic circuits and in the data centers, but it's efficient, fragile, and unsecurable in the middle. And that is not by chance: efficiency implies fragility. The takeaway is that we as a society need to stop making excuses for how much the glass sandwich hurts to eat and just stop eating it. Furthermore, the CPU-and-RAM model is philosophically and politically evil. It advocates centralization of control over ever greater resources and the elimination of intermediate structures. There's you and there's Facebook and nothing in between.

When we finally abandon hardware determinism, software will have to carry some of the load for reliability, and that's a pain, but in exchange we are offered indefinite scalability: the ability to make machines of arbitrary size that we can even determine after the fact. An indefinitely scalable architecture must be distributed in space, and we can view it and design it as a kind of wild cellular automaton, full of the sort of day-late-and-a-dollar-short dynamics that is typical not of the Game but of the reality of Life. Without centralized totalitarianism, we must have other mechanisms for structuring the physical resources of the machine across space and time, to allow us to compose more complex machines out of simpler ones.

In previous years at these ALife conferences, I've shown digital protocell membranes as down payments, as steps toward such space-time structuring, and my heart is still with cells and membranes. But this year I also asked myself, what is the simplest thing that could possibly work (I'm getting a little impatient), and how can we turn the challenges of indefinite scalability into opportunities? And that turned out to be a liberating process for me. At first everybody hates the fact that sites have no global addresses: how can I compute if I don't know where I am? But what I realized was that this means objects in those sites can be moved transparently. As long as everything they can see and care about moves with them, they have absolutely no way to know they were even moved. Same thing with synchronization: processes have no synchronization, and that's a pain, but it means processes can be delayed transparently by higher-priority processes coming through. And that's what this paper is about.

So here's a case where we just have a bunch of little individual 0D data atoms, and they're just looking at their immediate neighbors and swapping to try to get more clumpy, to get the colors to clump together. They don't really try that hard, because I wanted them to keep moving. And here is the swap line that I talked about a lot on Monday, a 1D structure that self-synchronizes to keep from tearing. It can also be used to perform large-object motion: when you pass a swap line over some space, everything that it passes over moves one step the other way. I demonstrated that several years ago, and I thought it was cool, and large-object motion was something that was a challenge to achieve, so it was kind of nice to do it.
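To make that first demo concrete, here's a minimal sketch in plain Python, not the actual Movable Feast Machine code, of a clumping rule in the spirit of what I described: each event picks one site, looks only at its immediate neighbors, and swaps with one of them if that increases like-colored contact, with a little randomness so the atoms keep moving. The grid size, color set, scoring rule, and acceptance probability are all illustrative assumptions, not details from the paper.

```python
import random

SIZE, COLORS, EMPTY = 32, 3, 0

# Random initial soup: empty sites plus atoms of a few colors.
grid = [[random.choice([EMPTY] + list(range(1, COLORS + 1)))
         for _ in range(SIZE)] for _ in range(SIZE)]

def neighbors(x, y):
    """The four von Neumann neighbors, wrapping toroidally at the edges."""
    return [((x + dx) % SIZE, (y + dy) % SIZE)
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))]

def like_contact(x, y, color, exclude=None):
    """How many neighbors of (x, y) hold this color, ignoring `exclude`."""
    return sum(grid[nx][ny] == color
               for nx, ny in neighbors(x, y) if (nx, ny) != exclude)

def update_once():
    # One purely local event: pick a random site and a random neighbor.
    # No global clock, no global addresses.
    x, y = random.randrange(SIZE), random.randrange(SIZE)
    nx, ny = random.choice(neighbors(x, y))
    a, b = grid[x][y], grid[nx][ny]
    if a == b:
        return
    # Would swapping increase total like-colored contact?
    before = (like_contact(x, y, a, exclude=(nx, ny)) +
              like_contact(nx, ny, b, exclude=(x, y)))
    after = (like_contact(x, y, b, exclude=(nx, ny)) +
             like_contact(nx, ny, a, exclude=(x, y)))
    # Accept helpful (and neutral) swaps, and occasionally unhelpful ones,
    # so the atoms "don't really try that hard" and keep moving.
    if after >= before or random.random() < 0.1:
        grid[x][y], grid[nx][ny] = b, a

for _ in range(100_000):
    update_once()
```

The point of the sketch is just that every decision is local: no site ever learns its global coordinates, so the whole population could be relocated and nothing in the rule would notice.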
But what I didn't realize until this year was that it also makes a handy sort of bastardized MapReduce operation. You can pass a thread across a region of space, accumulating information about it, and then fold the thread down to a single atom to get the final answer. So here's a case where we emit a thread that counts up the number of atoms of a certain type. It bounces off a terminator that causes it to flip direction, and then it comes back to where it started, and the answer is available. And with the possibility of having separate priorities, so that you can say, okay, when this swap line comes through, the other swap line should defer, we can start to compose larger scales and smaller scales together. And this is the thing that I just whipped up yesterday. It wasn't actually in the paper; the paper talked about hoping we could do it. Here it is. So we've got a MapReduce operation running that's performing repeated censuses of a bunch of data, which is sorting itself, while simultaneously a lower-priority process is passing swap lines through it. So the entire computation is moving. This is pretty cool. This is progress. It's not quite as organic looking, but I can see how to make this do stuff. That's what the paper is about. I hope you'll check it out.
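As a rough illustration of that census pass, here's a minimal sketch, again in plain Python rather than anything from the paper or the Movable Feast Machine: a counting token walks one way along a 1D row of atoms, tallies every atom of a target type it passes, bounces off a terminator, and walks back to its origin carrying the total. The atom names and the step loop are illustrative assumptions.

```python
# Hypothetical atom types for the sketch.
TERMINATOR, TARGET, OTHER = "T", "a", "b"

def census(row, origin=0):
    """Count TARGET atoms between origin and the terminator, one local step at a time."""
    pos, direction, count = origin, +1, 0
    while True:
        nxt = pos + direction
        if row[nxt] == TERMINATOR:       # hit the far wall: flip direction
            direction = -1
        else:
            if direction == +1 and row[nxt] == TARGET:
                count += 1               # "map": accumulate while sweeping outward
            pos = nxt
        if direction == -1 and pos == origin:
            return count                 # "reduce": the answer is back where we started

row = ["S", "a", "b", "a", "a", "b", TERMINATOR]
print(census(row))  # -> 3
```

In the actual demo, the data keeps sorting and moving while the census runs and lower-priority swap lines pass through it; the sketch freezes all of that just to show the map-then-fold shape of the operation.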