I'm Dave Ackley. I come out of artificial life, machine learning, and computer security, and all of that, especially computer security, pushed me into focusing on how we're going to fix the mess we've gotten ourselves into as far as computer security, and what we're going to do when cache coherence runs out. So for the last seven years or so I've been looking at non-von Neumann, post-von Neumann computer architectures, and we've gotten to the point where programming is a first-class issue on top of this architecture, so I wanted to put some of these ideas before the future programming folks, to seek allies and kindred spirits, or just to sharpen the pitch.

When I think of the future of programming, to me there's really just one distinction to make, and that is: does the programming still assume deterministic execution, same inputs, same outputs, or not? It seems clear to me that determinism of that style is a property of small computations, and what we've been doing so far are small computations by that view, and the model is under stress, in fact. So when I think of the future of programming, maybe not next week or next month, but in not that many years, it's mainly going to be the case that programming will have to take place on top of non-deterministic systems. That's what we've been exploring, and it's a very different space, but I can report that the ideas of objects and modules and isolation and communication and narrow interfaces and all of this stuff recapitulate in this alternate attractor of computation. They just manifest in different forms, and that's worth paying attention to. In deterministic execution on the CPU-and-RAM model, the instant we take these beautiful isolated, modular objects and reify them in RAM, their isolation and modularity is an illusion that lasts only until the first buffer overflow or wild pointer access or unversioned symbol and so forth.

So what we're looking for is a new deal between hardware and software. Instead of saying hardware provides 100% reliability and software gets to assume that forever, hardware is only going to provide best-effort reliability, and software is going to have to deal with that. Which sounds like no fun, and it is more work from one point of view, but in exchange the hardware is going to be indefinitely scalable. We can just plug more of it in and more of it in, as far as we want to go, and plop down a power plant every so often, or plate the whole thing with solar cells, something like that, to get it to run. And software itself becomes best effort as well. It doesn't have to pretend, rightly or wrongly, that it's going to produce a guaranteed correct solution. It's going to try as hard as possible, everybody's going to try as hard as possible, and that's all living systems could ever do, and manufactured digital computing systems are going to move more and more in that direction. That's the thrust of living computation.

So the model that we've been using is called the Movable Feast Machine. It's kind of like a stochastic, asynchronous cellular automaton on steroids. You can see some of the first-generation hardware that we did back in 2008, which you could actually plug together, and it worked great. We learned a lot. It has a fundamental design flaw, actually, which you can see in that picture if you know what to look for, so we need to build another generation of hardware. We'll do that soon.
But the general idea is: each of these tiles has a patch of the cellular automaton grid, and each of those sites can hold one atom. An atom is the model of an object in our object-oriented system. All atoms are the same size, each site holds one atom, and they move by being copied and swapped. And they're all tiny, more the size of a machine word than the blown-out kind of objects we routinely make in serial deterministic programming. The object is sitting someplace in the grid. One of the questions people ask about ULAM is, how do you find the absolute address of a location in the grid? And the answer is: you can't. Grid locations don't have absolute addresses. Every location thinks it's (0,0) and the world revolves around it. In addition to the two-dimensional window coordinates you can see here, there's also a one-dimensional indexing that's often easier to use, just scanning outward from the center. These 41 sites, each of which holds a 96-bit atom, are it. That's all of the persistent state, and it's only persistent assuming nobody else writes into it between the time you go away and the time you come back.

So the programming task is writing state transitions for these little event windows. And all of the more complex stuff that we would think of doing, building large systems, is going to happen by the interaction of many atoms, many instances of classes, sharing space and interacting via overlapping event windows.

So let's do an example. Here is element Ray. It's about the simplest thing I could think of. Element is the analog to a class; atom is the analog to an object. The important point here is that the behave method gets called automatically by the engine when an atom of type Ray has been selected to have an event, meaning it's at the center of the event window, and it has the responsibility to do whatever it wants to the event window until it gives up control. So we can compile this. It takes a little while, because GCC is running under the hood to take our output and turn it into machine code. It produces a .mfz file by default.

And here we are. This is a simulation of four tiles; that's what these gray things are. And in fact, even in the simulation, each of these tiles is running under a separate thread, so even the simulation is not actually deterministic, in addition to having pseudo-random numbers on each tile and so forth. So here's the Ray we just looked at. We plop down an atom of Ray, we run the thing, and it goes west. And it doesn't just go west; it's really serious about wanting to go west. If we try to erase some of it, it's not like it's given up and it's finished. This is a continuous process of westness that it does. We can beat it, because it doesn't have any defense against events coming from the east. But we really would want to do something more than just head west forever.

Let's take a look at another example. Here's a Line that is going to head west and then stop. Actually it heads east and then stops. The way it works is it has a data member that keeps track of what position this particular atom occupies in the overall line. And when it's its turn to behave, if it's not at the minimum index, it looks to the west and says, if that site is empty, make a Line there with index one less than mine. Same thing to the east, one greater than mine. And I've got these guys already loaded up here. Where's the Line? Here it is.
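To make the event-window idea a bit more concrete, here is a minimal, single-threaded Python sketch of that Line behavior. It is not ULAM syntax and not the real MFM engine: the grid is one-dimensional, the window is shrunk to one site on each side instead of radius four, and events simply fire at randomly chosen occupied sites to stand in for the asynchronous engine. The seed position, target length, and vandalism rate are made up for the sketch.

    # Toy sketch of the Line behavior described above (not ULAM, not the MFM).
    # Each atom stores only its own index in the line and, when it gets an event,
    # regrows any missing neighbor to its west or east.
    import random

    LENGTH = 8                     # target line length, assumed for the sketch
    grid = {20: 0}                 # site -> line index; seed one atom with index 0

    def behave(site):
        idx = grid[site]
        if idx > 0 and (site - 1) not in grid:
            grid[site - 1] = idx - 1        # regrow a missing neighbor to the west
        if idx < LENGTH - 1 and (site + 1) not in grid:
            grid[site + 1] = idx + 1        # regrow a missing neighbor to the east

    for step in range(2000):
        behave(random.choice(list(grid)))   # fire an event at a random occupied site
        if step % 300 == 0 and len(grid) > 2:
            del grid[random.choice(list(grid))]   # vandalize a random atom

    print(sorted(grid.items()))             # the line has grown and healed anyway

The self-repair shown in the demo falls out for free here: no atom knows whether it is growing the line for the first time or repairing damage; it only ever looks at its immediate neighborhood and acts.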
So if we put down one of these Line guys, and let's make a couple of them, you know, there it goes. And again, these guys will actually repair themselves, and they'll repair themselves from either direction, because that's what they do. Now what we want to do is build up spatial and functional complexity, by building more complex structures out of Line. For example, we could build a box. Let's get our pencil back. A box, and it'll have the same kind of property all these other guys have of just rebuilding themselves. Now we have an inside and an outside, and we can start differentiating uses of space in any way we particularly care for.

Now, there's an interesting point about the box. Since all atoms are the same size, we can't actually put an atom inside an atom. So even though we would have wanted to reuse those Lines as the lines of our box, we can't do it directly. What we do instead is take the behavior of a Line, the design of a Line, and squish it down to a subatomic particle we call a quark. In this case it's actually a quark template, so we can specify how many bits out of our tiny little bit budget we want to spend representing our position in the line, and otherwise the code is very similar. And so now we can build something more complex: the box we just looked at. The Box uses a quark as a data member, in this case a 4-bit quark, which makes a line of length 16. The only other trick is an additional data member representing the symmetry to be used for the coordinate axes. By default, west is west and north is north, but that's only the default symmetry. We can impose a 90-degree rotation, and then west will be north and north will be east, and if we run this exact same Line code under that symmetry, the line will go down. And that's how the Box works: it's really four lines, each of which thinks it's going in the same direction.

Once we have spaces inside and outside, we can make more complex structures, and this is the idea of how we're going to do composition. Composition of more complex structures involves three pieces. There's logical composition, like putting quarks inside other quarks and inside elements. There's spatial composition, where we have to decide where we're going to put these things. And there's temporal composition, where certain things happen before other things, or at a higher rate of speed, so that the dynamics start layering out and we can reason about the fast stuff while imagining the slower stuff is constant, even though it isn't really.

So I put all these things together, skipping a few steps in the interest of saving time, and I built a box and made a little toy data switch with it. The blue line is a box like we just saw; it's 64 on a side. And it plates itself with walls, one layer on the inside, two layers on the outside. That serves to provide isolation, because the event window is radius four in any direction, so with a wall four thick, something on the inside can't actually reach the outside, and vice versa. On the inside walls we have these little port guys, yellow, red, blue, and green, that emit data cells, and those are the things that are just bouncing around in there. Each cell has a 32-bit payload and an 8-bit sequence number. This switch knows nothing about packet reassembly or anything like that.
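As a rough picture of what fits inside one of those cells, here is a toy Python sketch of packing a 32-bit payload, an 8-bit sequence number, and a destination-port tag into a single small integer. The layout and the 2-bit port field are assumptions for illustration only; the real 96-bit atom also spends part of its bit budget on its element type and other housekeeping.

    # Toy sketch, not the real MFM atom layout: pack a data cell's fields into
    # one integer, the way the switch's cells carry a 32-bit payload and an
    # 8-bit sequence number.  The port field and ordering are made up.
    PORT_BITS, SEQ_BITS, PAYLOAD_BITS = 2, 8, 32

    def pack_cell(payload, seq, port):
        assert payload < (1 << PAYLOAD_BITS) and seq < (1 << SEQ_BITS) and port < (1 << PORT_BITS)
        return (payload << (SEQ_BITS + PORT_BITS)) | (seq << PORT_BITS) | port

    def unpack_cell(bits):
        payload = bits >> (SEQ_BITS + PORT_BITS)
        seq = (bits >> PORT_BITS) & ((1 << SEQ_BITS) - 1)
        port = bits & ((1 << PORT_BITS) - 1)
        return payload, seq, port

    cell = pack_cell(0xDEADBEEF, 17, 3)   # say port 3 stands for the green port
    print(unpack_cell(cell))              # (3735928559, 17, 3)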
Its only job is to get an individual cell from its source port to its destination port, like that. Now, by diffusion alone they almost never get there; they take the square root of forever to get there. But you can see the dark dots spreading through the grid. That is a routing grid that is building itself, seeded by the box itself. When a routing-grid atom sees a port, it notes that fact and then begins gossiping to its neighbors about how far it is to the various ports. And as you can see, as the grid establishes itself, the cells start heading quite directly. The yellow guys are heading out. Red seems to be the last one to settle out, because the routing grid starts in the lower right and doesn't reach the upper left until last. So now we're doing pretty well. We could actually up the data rate a little bit if we wanted to, something like that.

And again, this thing is fundamentally robust. It's robust by design, because it built itself, and it's continually trying to rebuild itself if we do something mean to it, like that. That's pretty good. That's going to take a while to recover from, but it's totally going to recover, because that's what it does. At this point we could even do something really crazy, like build a tiny little switch inside the big one, and this will actually work fine, because the routing grid will route the cells where they need to go.

Running out of time. There is this alternate approach to computation that I call the robust-first attractor, and it's where living systems really are located. When we do computing stuff, it's really hard to avoid using life-like analogies, the thing died and so on, because there is this intimate connection, and it's really rich. In this other approach, reliability is everybody's problem, hardware's job is indefinite scalability, and software's job, like hardware's job, is best effort. I hope you will join me in working to build the robust-first body of knowledge. In any case, thanks for watching.
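To make the routing-grid gossip concrete, here is a toy, single-threaded Python sketch of the idea. It is not the real MFM code: the grid is tiny, the two port locations are made up, events fire at uniformly random sites with no coordination, and a cell simply steps toward whichever neighbor claims the smallest hop count to its destination port.

    # Toy sketch of the gossiping routing grid described above (not the real MFM
    # code).  Each site keeps an estimated hop count to each port; on an event it
    # asks its neighbors and records one hop more than the best answer it hears.
    import random

    SIZE = 16
    PORTS = {"green": (0, 0), "red": (SIZE - 1, SIZE - 1)}   # assumed toy layout
    INF = 10**9
    dist = {(x, y): dict.fromkeys(PORTS, INF)
            for x in range(SIZE) for y in range(SIZE)}
    for name, site in PORTS.items():
        dist[site][name] = 0        # a grid atom sitting at a port notes that fact

    def neighbors(x, y):
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            if 0 <= x + dx < SIZE and 0 <= y + dy < SIZE:
                yield (x + dx, y + dy)

    for _ in range(60000):          # fire events at random sites, in no fixed order
        site = (random.randrange(SIZE), random.randrange(SIZE))
        for name in PORTS:
            best = min(dist[n][name] for n in neighbors(*site))
            dist[site][name] = min(dist[site][name], best + 1)

    cell, hops = (SIZE // 2, SIZE // 2), 0      # a data cell bound for the red port
    while cell != PORTS["red"] and hops < 4 * SIZE:
        cell = min(neighbors(*cell), key=lambda n: dist[n]["red"])
        hops += 1
    print("cell reached", cell, "in", hops, "hops")

The point of the sketch is just that nothing holds a global map: every site only ever compares notes with its immediate neighbors, and the cells still end up heading quite directly once the estimates settle.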