Our first speaker is David Ackley, on "Indefinite Scalability for Living Computations." Can you hear me in the back? Okay, great. I'd rather bellow than put on the microphone. I guess the first AI conference I went to was AAAI-82, the second national conference on AI, in Pittsburgh. This is quite different here. I'm thrilled to be here. A lot of things have changed. We were all excited about the Motorola 68000 running at 10 whole megahertz. But a lot of things haven't changed. In particular, we're still programming the same kind of machines, in the same kind of ways, that we did at the time. And I think we're going to need to change that sooner rather than later. I can't tell if this is on or not. Here we go. Okay, something. So if you want to follow along, the paper is at anime.us/bloop. Other than that, I'm just going to rush through and try to get to as much as I can in the time that we have. History says I run rather long, so it would be wise to start at the end. So I start with the conclusions. Here they are. For scalable systems, for computer systems that can grow much larger than the systems we have today, and for systems that can make a credible case for computer security, both the regular computers we have today and the terrifying internet of unsecurable things that is coming, we need society to push us in the computing industries to focus on robustness. And I suggest to you, this is the blue sky idea, that to get to serious robustness, we're going to have to give up on hardware determinism. We're going to have to give up on saying same input, same output. If that isn't terrifying, you don't program enough. But the happy news that I hope to leave you with is that there is life after hardware determinism. It just means that software has to carry some of the load of responsibility. We have been saying reliability is a hardware problem; it no longer is. And the suggestion is that we focus on this idea of best-effort computing.
We're going to be admitting that our programs may give the wrong answer. Our programs may be incorrect. But you'll still prefer that to something that pretends to be correct and fails catastrophically when you blink at it, let alone when malware gets to it, and other attacks. To get to that stage, we have to take this bitter pill: software, which has focused only on correctness and efficiency for 50 years, is now going to have to actually focus on reliability, on effective use of redundancy, on something more than just blind efficiency. We can drive that by saying we need computer architectures that are indefinitely scalable. We need computer architectures where we can just plug in more and more and more together, and it will be so big that parts of it will always be failing. It will be so big that we will be using it before we have finished building it. To program on that level of architecture, you have to face issues of robustness. You have to face issues of asynchronous interactions. You have to face all the issues of races and the stuff that we tend to avoid in small systems. And linking to Tom Dietterich's great presidential talk yesterday, where we have to deal with robust AI facing the unknowns, I want to suggest to you that this approach, indefinite scalability, is going to suggest a way that we can actually bound the unknownness of the unknown, which from one point of view seems obviously impossible, but which from another point of view is completely trivial, and I hope to get to it. All right, so here was Tom's talk yesterday. I couldn't agree more. The need for robust AI, yes, absolutely. High-stakes applications, where we are giving increasing responsibility to closed-loop interactions by systems made of software and programming that is just as crappy as all the other software and programming that we know how to make, are driving the need for something new. He suggests lessons from biology as one of the responses to it.
That's what I want to concentrate on, because that's what we're actually using in the approach that I'm taking. So this is a blue sky talk. The blue sky is that we will soon be ready to embrace moving beyond determinism in programming, but that doesn't mean we haven't done any work. We've been working on this for several years, and so I want to start with a demo to show you a little bit of what we've got. This is a simulator for an indefinitely scalable computer architecture. This is actually four tiles, it's a little hard to see, that are cooperatively connected at the edges. And what I would like to do is a quick demo, and also have it be sort of a metaphor for technology and human development. Okay, see if I can do both at once. So what I want to do first, whoops, is I'm trying to draw a square. I'm trying to draw a box. All right, so there's a box. It is a great box. The important point about this box is it has an inside and an outside. When you can differentiate space like that, you can put something inside it and conceivably do something different there than outside. Okay, in the metaphor, I suggest to you this is human technology from year zero to the industrial revolution: making stuff by hand to build tools and store things and do things. Okay, and that's great. And it worked fine. We did great. The industrial revolution took us to a whole other level, where we could automate processes that had been done manually, and now we get much higher quality, much lower cost, and we can knock out as many boxes as we like. And this is the industrial revolution. I salute you. Now, it's still true that once we've made these things, if we get our eraser tool up here to kind of simulate the passage of time, you know, they sort of all gradually suffer the slings and arrows of outrageous fortune, and they gradually fall apart. But that's okay: they're easy and cheap to make, so we're great.
What I suggest to you is we are moving into a post-industrial manufacturing framework that we're just beginning to understand. And here's my metaphor for it. Whoops, we can get that pencil back. This guy... you can get the eraser going again. It doesn't really erase very well. You can get a bigger gun here, so you can see what's happening. This thing is not just a fire-and-forget, build-it-once box. This is a thing that knows it wants to be a box. It knows what it means to be a box, and it heals and repairs itself as it goes along. This is, I submit to you, a living computation in a fundamentally real sense, more so than most of the systems that we have built today. Okay, so I can actually attack this. These guys are hard to kill. Give me a bigger gun. Okay, I killed one. You get the idea. So the suggestion is: how does this sort of thing work? How do you get from a passive do-something-once to an automatic self-healing, self-aware thing? Self-aware, maybe that's too strong, but in a very limited way, a very small way, each individual piece of this thing knows what it is. So this is the source code for, not a whole box, but just a line, a 16-segment line. And it has several key aspects. The one I want to focus on is this thing, m_pos: it's a data member, a four-bit unsigned value that says where I am in the line segment. And when it's my turn to do something, my behavior function gets called automatically, and I check: is my number the smallest number? No. Is the space west of me empty? Yes? Reproduce. These three lines, roughly, say: make a copy of myself, decrement the position in the copy, and store it to my west. That is how the healing and the constructing works. Same thing the other way: if I'm not at the maximum position and my east is available, make a copy there, all right?
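The rule just described can be sketched in plain Python rather than ulam. This is a minimal sketch, assuming a 1-D grid instead of the real 2-D event window; the names `behave`, `grid`, and `EMPTY` are my own, not the speaker's:

```python
import random

EMPTY = None
LENGTH = 16  # a 16-segment line; positions 0..15 fit in a 4-bit unsigned

def behave(grid, i):
    """One event for the line element at site i.

    If I'm not at the minimum position and the site to my west is empty,
    copy myself there with position - 1; symmetrically to the east with
    position + 1. The same rule does both construction and healing.
    """
    pos = grid[i]
    if pos is EMPTY:
        return
    if pos > 0 and i > 0 and grid[i - 1] is EMPTY:
        grid[i - 1] = pos - 1          # rebuild westward
    if pos < LENGTH - 1 and i < len(grid) - 1 and grid[i + 1] is EMPTY:
        grid[i + 1] = pos + 1          # rebuild eastward

# Seed a single surviving segment, then fire random asynchronous events
# until the whole line has regrown around it.
grid = [EMPTY] * LENGTH
grid[7] = 7
while EMPTY in grid:
    behave(grid, random.randrange(LENGTH))
print(grid)  # [0, 1, 2, ..., 15]
```

Because each element only ever looks at its immediate neighborhood, erasing any stretch of the line (as in the demo) just re-creates the situation the seed started from, and the same local rule fills it back in.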
This is a program in the language ulam, which we have developed specifically to study how to program in this indefinitely scalable, living computation, robust-first approach. It's got packages for Ubuntu 12.04 and 14.04 you can install and play with today, if you're willing to install from a personal package archive. So we have our metaphor. I suggest that the future is these living systems that take care of themselves. How can we make a box that takes care of itself? We have to push information about the global needs of the computation down and down and down and down, until individual guys can do something to make things better. And in this case, a guy can fill empty spots that should be part of the box, all right? And that's the essential difference from traditional top-down serial deterministic computation, where everything gets done once, it gets done efficiently, relying on the perfect stability of RAM, relying on the fact that memory once written can be relied on forever. Living systems don't do that. They rebuild themselves continually as needed, and we can afford to do that. We can take this sort of stuff and then make it much more efficient once we know what we want. It's a new research area, figuring out how to do stuff with this and not have the whole thing melt down with fork bombs and crazy cancers. All right, so in the remaining six and a half minutes, I would like to try to paint a picture of where we're going with this, and then circle back around to one more little demo to finish up, okay? I show this slide in pretty much all of my talks. I'm not going to go through it in detail.
The point for us today is just that there are two very different approaches to doing computation, and the one where we have spent most of our time (although AI, more than many disciplines within computing, has made some forays into the other column) is still very heavily the finite-scalability one: an algorithm with a begin and an end, where in order to even talk about it, it's supposed to be correct, and then once it's correct, the goal is to be as efficient as possible and as robust as necessary. Robustness is the last thing that you do. If it actually crashes, you go back and robustify it until it doesn't, then you ship. Or you already shipped, and then you fix it later. The alternate approach says that rather than the goal being to finish as quickly as possible, the goal is to survive no matter what. To have ongoing processing where robustness, the ability to give some kind of answer and to keep on processing, is goal number one. And then, yes, given that we're going to stay alive and do something, we're going to try to be correct. But we're not going to pretend that correctness is an achievable requirement in general. I mean, what is correctness for a Google search? Most software today cannot be correct or incorrect; it hasn't got a spec. So we're kind of living in the emperor's new clothes, and we've been here for a long time. All of these other things follow. It's not going to be centralized. And rather than doing logical inferences to reason about what's happening (if x, then y), we're going to be doing statistical inferences, saying the odds of this thing going that way are less than epsilon, so I'm going to ignore it, and so on. And then, fundamentally, the response to error is completely different. Tom Dietterich's talk mentioned Minsky saying how programs will typically crash if they get anything unexpected, as if this were surprising. But no, I mean, we designed machines to do that.
That's the whole point, because we have given not a second thought to what the program should do if the if statement goes the wrong way. What would we even begin to say? Whereas in the indefinite scalability world, we're going to hide errors and heal as best we can. And it leads to a very different view: the sort of master-of-the-universe "you do what you're told, don't ask why," versus everybody is empowered to try to make the world a better place. That is the robust-first approach. You have to figure out how to take a complex global problem and break it down into tiny little consequences, so that a locally situated agent, looking at what's around it, can make a justifiable case for saying: well, I'm part of a sorter. We all agree that big numbers go up and small numbers go down. This number is bigger than the one I just saw, so I'm going to move it up. And that just makes the world a little better; get enough people, enough agents, enough things, enough sites in there, and the job gets done. That's the approach to computation. All right. Oh yeah, and then this is the advocacy part of the talk. We have been mining the traditional serial deterministic vein for 60 years, and it's been great. It's also been terrible. And there is a fundamental sense in which serial determinism, where you say I'm going to be maximally efficient, never do anything twice, and the answer has to come out the same way every time, is fundamentally unsecurable. It's not that computer security is just something we haven't focused on yet. Because we designed computers to be universally programmable, and that's great, but that also means they are completely unpredictable when one error occurs. If we can adversarially choose a single bit flip, we can take a machine that's doing anything and make it do anything else. And then we're putting it in control of the flaperons on the 787 and so forth.
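The sorter example can be sketched the same way. This is a toy stand-in, not the actual distributed sorter from the talk; the flat array and the `agent_event` name are my own simplification of agents acting on local sites:

```python
import random

def agent_event(a):
    """One agent event: look at one random adjacent pair and apply the
    single rule everybody agrees on -- bigger numbers move up (right),
    smaller numbers move down (left). No agent ever sees the whole array."""
    i = random.randrange(len(a) - 1)
    if a[i] > a[i + 1]:
        a[i], a[i + 1] = a[i + 1], a[i]  # make the world a little better

data = random.sample(range(100), 20)
while data != sorted(data):
    agent_event(data)   # enough local events and the job gets done
print(data)
```

No event has a global plan or a guaranteed schedule; the sorted array emerges purely from many independent, locally justifiable moves, which is exactly the robust-first decomposition being described.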
We're in crazy land, taking machines like this, deterministic machines made of independent components that have no clue what they're doing, and giving them increasing ballistic, financial, and chemical responsibilities in the real world. Okay. Almost done. All right, I already ranted about this. This is more about our particular approach to indefinite scalability. The idea is you make a hardware tile. How do you make a computer that's indefinitely big? You make a tile that can be plugged into copies of itself, they're completely fungible, and you make as many as you can afford. And that's what we've done. There's a picture of the first-round hardware that we did back in 2009; we're hoping to do a second round of hardware this coming year, if things come together. The fact that these are independent tiles with independent CPUs means that there's no global clock. There cannot be any global clock. These tiles are racing against each other. When one of them finishes an event, it starts another event. That means it might get two events before the other guy gets one. Deal with it. Those are the ground rules for indefinite scalability. Okay. And let's go back. So now, the idea is, you know, I give this talk and people nod: well, yeah, maybe, but no, I'm not going to work on that. And I understand it's a tough thing to swallow. But you can do things with it that are really not too bad. And I will stop, and I'll put this up here, and then I'll take a couple of questions, because really we're out of time. This is a toy four-port data switch. It's got colors, red, green, yellow, and blue on the sides, which are the ports that are injecting data cells, each of which is carrying a four-byte packet that is destined for a random other side. At the moment, they're just vibrating around, diffusing and going basically nowhere.
But there's a routing grid, the dark dots that you may be able to see, that is growing spontaneously as part of the machine's operation, and when it sees a port, it starts gossiping to its neighbors about distances: the red port is this way, the green port is that way. And then the data cells use the information in the gradients on the routers to get where they're going. And as this thing finishes up, it'll gradually clear out all this backed-up data backlog and process fairly well. And it'll do so in this ridiculously robust way. I mean, we can blow giant holes in this thing. And will it survive? Sure, it'll survive. Why will it survive? It will survive because it just rebuilds itself, the same thing it did at the start. So, I'm out of time. The goal is robust systems. I hope I piqued your interest. There's much more stuff on the web. Thank you very much.
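The gossip-and-gradient idea can be sketched, too. This is a 1-D toy, not the 2-D four-port demo: one port, a strip of router sites, and a single data cell, with all names (`dist`, `gossip_event`) my own:

```python
import random

N = 16
INF = float("inf")

# Router sites gossip hop-count distance to the port at index 0.
dist = [INF] * N
dist[0] = 0  # the port announces distance zero to itself

def gossip_event(i):
    """One router site adopts the best neighbor distance plus one.
    Repeated random local events grow the global gradient."""
    best = min(dist[j] for j in (i - 1, i + 1) if 0 <= j < N)
    dist[i] = min(dist[i], best + 1)

# Fire asynchronous events until the gradient reaches the far end.
while dist[-1] == INF:
    gossip_event(random.randrange(1, N))

# A data cell at the far end just steps toward smaller distances.
cell = N - 1
while dist[cell] > 0:
    cell = cell - 1 if dist[cell - 1] < dist[cell] else cell + 1
print(dist, cell)  # gradient 0..15, cell arrives at the port
```

If you "blow a hole" in `dist` by resetting a stretch of sites back to `INF`, the same gossip events simply rebuild the gradient, which is the sense in which the switch survives damage by doing the same thing it did at the start.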