All right, let's go ahead and get started. So we're very happy to welcome Dave Ackley back to Tufts; he graduated in 1979 as an undergraduate. Was Cohen even here when you were? Cohen was here? So he's been in Cohen as an undergrad. He's now coming back as a professor at the University of New Mexico, talking to us about a fascinating topic at the intersection of biology and cognitive science and computer science. So I'm super looking forward to this. I will let Enoch introduce him on behalf of Daniel Dennett, who was going to introduce David when we invited him, but who unfortunately cannot be here because he fell ill. Anyway, so, Enoch. Okay. Hi, I'm Enoch Lambert. I'm a postdoc with the Center for Cognitive Studies, and I'm pleased to introduce Professor Ackley on behalf of Professor Dennett. So let me say just a couple of words. Professor Dennett sends his regards. As Ani said, he's ill today and very much regrets not being able to be here. Now, although Professor Ackley is a Tufts alum, Professor Dennett first met him, he says, at the Santa Fe Institute, where, in Dan's own words, they spent a delicious day brainstorming computer models of the evolution of communication with John Maynard Smith, the great evolutionary theorist. And he's enjoyed spending time with Professor Ackley at the Santa Fe Institute ever since. Now, with degrees both from here and Carnegie Mellon, Professor Ackley has spent his career connecting life and computation, contributing important research on neural networks and machine learning, including a foundational paper with Geoff Hinton and Terry Sejnowski. He's also worked on evolutionary algorithms and artificial life, and biological approaches to computer security and architecture. And with that, please join me in welcoming Professor Ackley to talk to us on artificial life and post-deterministic computing. Thanks, Enoch. 
Yeah, I've hung around with Dan Dennett a few times and it's always great, and I always try out my lame-ass philosophy on him and he swats me like a fly, and then I go back and try to figure out what to say next time. So I'm not really mad that he couldn't be here today, but we'll figure out a way to do it. Okay. I've learned over the years that I can't shut up, and the only safe thing to do is start with the conclusions. So here they are. The message I want to leave with you, and this is something that I actually feel quite strongly about, is that the way we're computing today, the way our computers work, all of them, is based on this idea of hardware determinism. And anything that's in italics on this slide, we're supposed to understand by the end of the talk. If it doesn't make any sense now, that's okay. Hardware determinism and logical inferences. And that approach is running out of steam. It's done its great work for 60 years, 70 years now, but it's pooping out. And furthermore, by its very nature, it's almost impossible to make a computer be secure. It's almost a joke actually, a bad joke. And society is paying a price for that bad joke. And it angers me that we are imposing that price on society. As a computer scientist, I'm embarrassed for our discipline. We should do better. The suggestion is there is a way to do better. And the way to do better, I think, is what I have called best-effort computing. We're supposed to give up on the idea of everything being absolutely perfect, accept that there will be flaws, accept that there will be mistakes, and figure out how to make progress anyway. At the end, can we guarantee the answer is right? No. But in fact, we never really could anyway. Not if it's a physical device as opposed to abstract logic. So we should admit that. The problem is this alternate approach, this best-effort computing, changes so many of the basic assumptions that the way we're computing now is built on that it's pretty hard to even get started. 
If you think, oh, I'm doing something really radical and different because you changed two ideas of traditional computer science, that's not enough. Where you land is going to look strictly worse than where we are already, and you're going to fall back into where we are today. So what I've been trying to do is hold my breath and take a giant leap and change everything all at once and say, suppose we could land, suppose we could find a place to land and plant a flag and build a little teeny colony that gets through the first winter without everybody dying, and then could say, come join us. Let's build this little city here. We can do it. It's possible to survive. And the reason I know that there is a place to land if you change everything all at once is because the place that we land is the place that living systems live. Living systems are fundamentally different from the computers that we have today, in ways that I will talk about today. In fact, they're complementary. At every turn, they go the opposite way. So if one is using the traditional, mechanical computers of today as a model of the brain or an organ or a gene regulatory network or what have you, you can get a little bit of juice out of that, but you're also going to get a tremendous amount of misleading results, because the model of computation itself is so different. The resulting approach I'm calling post-deterministic computing. I would call it non-deterministic computing, but that's a technical term in computer science, and people would think they knew what I meant when I need them to not know what I meant. And post-deterministic has all that flavor of postmodern and who knows what it is. So really, you have no choice but to listen if you care. And if not, well, somebody will get you to care later if they can. So that's the idea. Those are the conclusions, hopefully. Before we run out of time, we'll come back around to that. I want to tell an old fart story, if you'll indulge me. 
In 1974, I came to Tufts as an undergrad, utterly clueless. I lived in Hodgdon Hall. Is that still a thing? All right. The next year, I lived in Tilton. And then we moved off campus, because it was so much cooler to move off campus. And I made friends with people who've been friends with me ever since. I saw some of them for dinner yesterday. But if I had to be honest, if I had to talk about where I really lived, I lived in the basement of Miller Hall. Why did I live in the basement of Miller Hall? Well, that's where the computer center was. Now, if you don't know what a computer center is, it's a place where you had to go to use a computer, excuse me, to use the computer. And it had all these little booths that you could kind of sit in, with acoustical tile around them, because what you were talking to was a typewriter, one that had these reams of paper with the little holes down the side going through it. And that was how you programmed. And I had started programming in high school, and high school was punch cards. So this was great. And in my junior year, I think, put it this way, in my first junior year, I had some trouble. I had to get my head straight. It was relationships, other people. And I couldn't do it. I couldn't deal with that. I couldn't deal with my classes. I stopped going to classes. I sat in the basement of Miller Hall, and I wrote code. I just made things up. I said, how about a program to list everybody who's on the system by... Well, that sounds cool. I would just write that until it was working, or until it was working well enough and I got bored with it. And then I would think of something else and I would write that, 18 hours at a stretch. And, you know, I was completely tanking. By the end of the semester, I would see people leaving the basement of Miller Hall to go take final exams in classes that I was registered in. And I was saying, good luck. And then... Like that. 
And really, the only thing that worked at this point was programming. And there was a reason for it, because computer programming, traditional computer programming, once you get into it, once you get over the fact that it's so damn picky about everything. Another parenthesis, another ten, whatever. When you first start programming, it's just aggravating. But once you get over that and you can start talking to it, it's incredibly heady. It's an incredibly powerful feeling, because you can make it do anything you want. And that's what I was enjoying. I was enjoying being the utter master of the universe. Get these numbers? Sort them? Format them? Compare them to those numbers? Print them out three times so they appear in bold on the paper? Cool. With conventional traditional programming, you are the master of the universe. Granted, it's a small universe. I could not program my grades up for the semester. I got some bad-looking letters on my permanent record. But you know, I think maybe I'm going to get away with it. Seems like it's working out. And you know, I took time off and I got a job, I got shrunk, everything worked out and it was great and it's all come out. But that feeling of power is what I need from the story today, because it's addictive. It certainly can be. It's incredibly pure. You are the emperor of everything. And if we stop and think about it, if we stop and be really honest, shouldn't there be a price for that? Are we giving something up by having that kind of power? Is that kind of power really real? And the suggestion is that yes, there is a cost. And we have not been acknowledging that cost, and that's why we're in kind of trouble today. And we're going to be in worse trouble tomorrow unless we start dealing with it. Okay, I got through Tufts. I went off to grad school. 
I did machine learning and all of this increasingly crazy stuff, which was all basically reacting against that traditional serial, step-by-step, you're-in-charge, nothing-happens-except-what-you-say approach to computing. And eventually, in the 90s, I'd been looking at these ideas of artificial life, which seems like a complete contradiction in terms. Life is natural by definition. If it's artificial, how can it be life? So we have to sort of expand the definition of life and say, well, when I say life, I just mean something that you can make copies of, and the copies do this, whatever. And I started to think about all the things that life does: it has all of this diversity, it has variations within the species and so forth, and that was important. And the computer systems that we have did not have that kind of diversity. Every single copy of Microsoft Word was essentially identical to every other copy of Microsoft Word, at least back in the 90s, down to every single same damn bug. Get a bug in one of them, you've got a bug in all of them. So I spent a decade trying to use ideas from life to make computers more secure. And mostly what that was was trying to separate out what the program really cares about. A program says, I need this part of memory to have these numbers, I need this part of memory to have the sorted copy of those numbers. And that's it. That's all the program really cares about. But if you actually write the code, compile it, and run it in the computer, there'll be all these other things that are true as well. Like, this block always happens to be here, it always happens to be 100 bytes away from the other one, and that's completely irrelevant, but it just comes out that way when you do it. And the thought was, we should get rid of that. Unless we require something to be true, we should deliberately make it randomized. Because that's where attacks typically come from, or that's one big source of attacks: all of these regularities that programs have that the program itself doesn't need. 
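To make the diversity idea concrete, here is a toy sketch of my own (not the actual systems from the papers mentioned): two "installs" of the same program keep the same named blocks and contents, but each gets an independently randomized, non-overlapping layout, so incidental regularities like "always 100 bytes apart" differ from copy to copy. Real systems do a version of this as address-space layout randomization (ASLR).

```python
import random

def lay_out(blocks, arena=1_000_000, rng=random):
    """Place named (name, size) blocks at random non-overlapping offsets."""
    placed, taken = {}, []
    for name, size in blocks:
        while True:
            base = rng.randrange(0, arena - size)
            # Accept only if this block overlaps none of the earlier ones.
            if all(base + size <= b or base >= b + s for b, s in taken):
                placed[name] = base
                taken.append((base, size))
                break
    return placed

# Two "installs" of the same program: identical contents, different geometry,
# so an attack hard-coded against one layout misses the other.
copy1 = lay_out([("numbers", 400), ("sorted", 400)], rng=random.Random(1))
copy2 = lay_out([("numbers", 400), ("sorted", 400)], rng=random.Random(2))
print(copy1)
print(copy2)
```

The program's real requirements (two blocks of the right sizes, not overlapping) are preserved; everything else is deliberately randomized.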
It's just a side effect of how the program was put together and made into an executable form. So that was the diversity argument, building diverse computer systems like that. These are the papers. And that was great, and we did a bunch of work around that, using diversity to improve security in traditional computing. Until I got to about 2005, and I gave up. I got depressed. I said, you know, this is sticking fingers in the dike, trying to block the flood. This is never going to succeed. We are creating new holes in the dike dozens of times faster than any reasonable way to plug the ones that we have. Another year, another 10 times farther behind. And that was very depressing, because again, computer science is my field. And computer viruses, hackers, identity theft, all of this stuff just got worse and worse and worse and worse, and really, from my point of view, that was on our plate. That's on computer science's plate, all of that happening. Most of it, anyway. So when I came back from this, I said, we have to start over. We have to come up with a new approach to computation that just throws everything out and comes back in, and then figure out how to get it to do what the current stuff does so well. And that's what I want to talk about. Okay. So you get the idea about how long I talk. So, our outline for today. I want to talk about these two attractors of computation, these two different approaches, both of which I claim perform computation, although some people would disagree, depending on exactly what you think computation is. And then I want to introduce this particular idea called the neutral dynamics. The neutral dynamics of a system is what it does before you mess with it. When you have a programmable machine, but you haven't programmed it yet, what does it do? That's its neutral dynamics, okay? It's sort of pointless in computers today, but it doesn't have to be. 
Now again, I should warn you, especially if you're not in computing: just like when you're growing up and you go out and you go to elementary school and you say something that you think everybody is supposed to know, and then you find out, well, it's only your parents that say that. Nobody sings that song except your parents. Nobody actually tells that joke except your dad. The neutral dynamics is one of these things. If you go up to a computer scientist and say, what's the neutral dynamics of your system? They're going to go, you're an idiot. But it shouldn't be that way, and hopefully it won't be in the future. Anybody, any of the folks that read Dan Dennett's chapter, Brains Made out of Brains, for this sort of stuff, yeah. I mean, there's some fair thinking in that about the sort of top-down versus bottom-up approaches to it. And I have a certain amount of difficulty with the whole concept, but I'm definitely much more in favor of the sort of bottom-up approach. And what I want to suggest is there ought to be a way to be bottom-up and yet still have intention, still have a purpose, even though you don't get to specify exactly what's supposed to happen. Bottom-up engineering. That's what I think we need to be learning how to do, and I'm taking these tiny little steps towards that as we go along. And when we do that, when we start taking these bottom-up engineered steps, what we find we're doing is we're building little artificial life components, artificial life cells, modules, examples, something like that, that we might be able to put together and build more complex things that will do something useful for us, like add one and one and get mostly two, most of the time, but even if you blow a hole in it and stamp on it, it'll say 1.8 rather than just saying nothing, as traditional computers will when you mess with them. And then let's do some demos. All right. So in order to understand what's wrong, we need to understand a little bit about how it works. 
And so the approach that I'm talking about is called serial determinism, or a little more generally, it's called hardware determinism. And what hardware determinism means is that when you buy a computer, phone, whatever, it's got a built-in level of gates and silicon and all this stuff. And that machine promises that if you program it the same way and give it the same input, you will get the same output, guaranteed, deterministically. That's it, okay? And it didn't have to be that way, but it is. We design computers with the hardware being deterministic. And what that means is software doesn't have to worry about faults and errors at all, because faults are a hardware problem. If something goes wrong, the programmers, their hands are clean. Go get the hardware guy, I'm good. So the basic approach, and this goes back to von Neumann and a lot of other folks, is based on these two major components: the CPU, the central processing unit, and the RAM, the random access memory. And the job of the RAM is to remember whatever it's told, and that's it. It does not have authority to say, that doesn't seem right, or, are you sure you wouldn't want to put another parenthesis there? It's not allowed to do anything except remember what it's told and cough it up later. But it has to do that flawlessly. If you have a program that runs for 10 days and you write a number to a piece of memory on day one, it had better be exactly the same thing when you read it back on day 10, or else it's hardware's fault. Determinism. And the CPU, the central processing unit, in charge, just does this little loop over and over again. It has this notion of the current instruction that it's working on. It says, gimme that. It looks at it. It says, oh, it wants to add this and this. Go get them, bring them in, add them, get the result, put it back in RAM wherever it goes, advance my notion of current to the next instruction, and do it over again. That's it. That's the computer. 
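That fetch-decode-execute loop can be sketched in a few lines. This is a toy machine of my own invention, purely for illustration (the three "instructions" here are not any real instruction set): the RAM is a dumb dictionary that only remembers, and the CPU mindlessly repeats the little loop just described.

```python
# A toy fetch-decode-execute machine: the RAM only remembers, flawlessly;
# the CPU fetches the current instruction, does one tiny step, and moves on.

def run(program, memory):
    pc = 0  # the CPU's notion of the "current" instruction
    while True:
        op = program[pc]                            # fetch
        if op[0] == "ADD":                          # decode...
            _, dst, a, b = op
            memory[dst] = memory[a] + memory[b]     # ...execute: read RAM, add, write back
        elif op[0] == "JLT":                        # jump to target if memory[a] < memory[b]
            _, a, b, target = op
            if memory[a] < memory[b]:
                pc = target
                continue
        elif op[0] == "HALT":
            return memory
        pc += 1                                     # advance "current" and go again

# Sum 1 + 2 + ... + 5 the way a CPU would: one mindless step at a time.
mem = {"i": 0, "one": 1, "n": 5, "sum": 0}
prog = [
    ("ADD", "i", "i", "one"),    # i = i + 1
    ("ADD", "sum", "sum", "i"),  # sum = sum + i
    ("JLT", "i", "n", 0),        # if i < n, loop back to the top
    ("HALT",),
]
print(run(prog, mem)["sum"])  # → 15
```

Every cat video is this, scaled up by a factor of billions.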
Every cat video, every game of solitaire is that. A billion, billion times every second. It's truly boggling if you actually sit and use a computer and you think about what's happening. If you think about what's happening to paint von Neumann's face on the screen here, your head explodes. You have to pop up to this other level of abstraction, which is fine, but we cannot forget what's really going on down there. There's one guy who's saying, i plus one, i equals zero, if i is less than j, all day long. And that's really clever. It's incredibly clever, but it can only go so far. That one guy can only go so fast. I mean, he's going really, really fast. But if we want him to go 10 times faster, we can't do it. We have to cool him in liquid nitrogen because he's sweating so bad from trying to go that fast. And we have to start doing all kinds of incredibly heroic things, other special exotic materials, and so on and so forth. It poops out. You can only get one guy to go so fast. And the RAM, this whole idea that memory is completely the same everywhere. What you have to do is give the number of the piece that you want, and you get it. It takes the same amount of time whether you're getting location zero or location 32 billion. That doesn't make sense either. That doesn't scale either. Memory actually takes up space. You have chips inside, and some of the chips are farther away from the CPU than others. It takes longer, in terms of light speed, to get the information from the farther-out chips than it does from the nearer chips. But when we engineer the machine, we take that into consideration and we make the whole thing run slow enough that we can get the slowest guy in, and then say, well, that's it. That's as fast as we're going. So this approach, hardware determinism. This is how it works. It's beautiful. It's simple, but it's limited. It's only finite. If we're going to make really big computations, we're going to have to do something else. 
Okay, so let's unpack computation itself a little bit. What is computation? It depends on who you talk to. It depends on what discipline they're in. But here are little bits and pieces of it. So what we just talked about, the von Neumann machine, step-by-step rule following with exact effects, is the most common understanding that people have of digital computation, what it means. And the whole point is that if you make every individual step small and simple enough, then you can do it automatically. You can build electronics. You could build relays or water wheels or whatever, but electronics, to do those steps automatically and then go fast. And if you do that, many, many, many, many, many, many, many, many, many of these steps are required, and every single one of them must be perfect. Every single one of them must be flawless. Deterministic, or else all bets are off. Hardware problem. And one thing that bothers me a little bit is there's kind of a new movement within computer science called computational thinking, that we need to get people to understand what computation is in nursery school or whatever, every grade, because it's so important to our future. And so important to our future, yes, I agree with that. But there's this sliding in that this hardware determinism is what it means to do computational thinking. And if you're doing something else, something gushier, like, I don't really know, but I'm just gonna get a whole lot of good stuff over there and a whole lot of good stuff over there and let them kind of get together and good stuff will happen, that doesn't get to be computation according to the traditional rule. Yeah. Are small steps required for it to be computation? So the classic example is a recipe for cooking, right? Heat butter and flour, whisk in liquids and spices, serve. That is a great recipe for gravy. How much flour, how much butter? You'll figure it out. You just vary how much gravy you end up with. 
And whether you use a spoon or a knife to serve it. What liquids? Well, you know, probably not gasoline. But milk, water, stock, wine, coffee, beer, sure. They will all make great gravy. It hardly matters. This is a great recipe, even though it's not tiny little individual steps that could be executed mechanically. Why do I not have to say, asterisk, do not use gasoline? Because the hardware is supposed to be taking care of itself. The hardware is supposed to know gasoline is dangerous, do not eat. That wouldn't have to be specified here, if we could specify high-level, complex steps that require interpretation. The thing that is running them, the thing that is performing them, has a degree of agency. It's protecting itself. And we can count on the fact that it's protecting itself to not do completely stupid stuff. So finally, if we're stepping away from von Neumann hardware determinism as the definition of computing, and we're saying there's something sort of bigger, and maybe it won't work, maybe it'll work sometimes, your mileage may vary. Whoa. That's the opposite of determinism. Determinism says your mileage will be exactly equal to my mileage. Fundamentally, the idea is that what computation is is interpreting a physical system for some purpose. Taking some observation of some physical system and using it, saying, oh, well, I think the amount of electricity there means the number two. And so two is the answer. And that puts together two of the sort of most common interpretations of computation, in terms of mapping and in terms of utility: I have a purpose, it is useful for other things. The problem is, and I just want to end with this being a little bit confusing, is, all right, well, if we have this great gravy recipe, that's obviously a program. It's teaching you, telling you, instructing you how to make gravy. And on the other hand, we can imagine if we look at the weather report to see if it's going to rain, that's obviously data. 
But it's not so simple in the computation-writ-large perspective. Because, for example, if I am looking through the cookbook, trying to find something that I can make without going shopping, now the cookbook is purely data. I am running a filtering program over it. On the other hand, if I always bring an umbrella if the chance of rain is greater than 50%, then that is a Dave-umbrella-carrying program that the weather data is running on me. We bring our interpretation to where the computation is and what the roles are. We just have to live with that. Okay. But if all these things, making gravy, big complex things, are supposed to count, doesn't that mean we have to give up on the original idea of hardware determinism? Well, yeah, kind of. But we have support for that. The same guy who gave us hardware determinism said, oh, this is just a beginning. This is not the way computing is going to be in the future. In the future, he says, the logic of automata will differ from the present system of formal logic in that the length of the chains of operations will have to be considered. You won't be able to do 200 billion operations to put the queen on the king, because it won't be reliable. You have to keep the program short. And furthermore, each individual operation of logic will have to allow a probability of failure, a probability of malfunction. That's exactly what we're talking about. Giving up on hardware determinism and going instead to, well, he didn't really say. He waved his hands and suggested maybe thermodynamics, the Boltzmann distribution, something like that, which actually I quoted in the neural network paper that came in the 80s. And he thought, when was this going to happen? When are we going to give up on hardware determinism and start growing up and getting serious about hardware? He figured it'd be about the time we got to machines that have 10,000 organs, by which he meant 10,000 gates, by which he meant 10,000 transistors. 
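It's easy to see with a little arithmetic why the length of the chains of operations matters. If each individual operation fails with some probability eps, a chain of n operations is entirely correct only with probability (1 - eps)^n. The numbers below are illustrative choices of mine, not from the talk:

```python
# Probability that an n-step chain of operations completes with no fault,
# when each operation independently fails with probability eps.
def p_all_correct(n_ops, eps):
    return (1.0 - eps) ** n_ops

# A one-in-a-billion failure rate is harmless for a short chain...
print(p_all_correct(10, 1e-9))                # essentially 1.0
# ...but dooms a 200-billion-step run: (1 - 1e-9)^2e11 ≈ e^-200.
print(p_all_correct(200_000_000_000, 1e-9))   # essentially 0.0
```

So either each operation must be made essentially perfect, which is hardware determinism, or long chains must be abandoned and failure allowed for, which is the road not taken.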
He figured by then it would be so complex that we would start this approach. We are now well over a billion transistors in the stupidest computer that you can buy today. And we're still doing hardware determinism. It's crazy. It's nuts. It doesn't scale. It's unsecurable. And if we're thinking computing could be much bigger than this, which I do, we are climbing a tree to get to the moon. We are going to have to go down the tree first. It's just a question of how much societal pain and suffering we're going to have to go through before we reboot. So my mission, the reason I'm here talking to y'all today, is to try to hasten the day when we say, wow, there is another approach. There is another approach, and we need to study it. We need to research it. We need to figure out how to engineer without hardware determinism. Okay. All right. So in one slide, I'm not going to go all the way through this because it would take a whole talk by itself, but just to show the counterpoint, just to show how these approaches are so different in so many ways. The middle column is traditional, what I'm now calling finite scalability, the hardware determinism approach. And the right-hand column is the alternative that I'm suggesting, the indefinitely scalable approach. And just down the rows: different, different, different. Finite scalability talks about algorithms. What's the property of an algorithm? You give it the information at the beginning, you then stop and wait, and it gives you the answer at the end. It's inherently finite. And if you think about trying to get along with algorithms as if they were, you know, like people, you wouldn't want to. They're like big prima donna jerks, right? Say, well, no, tell me everything that I need right now. You can't change it, just tell me what I need. Okay, wait, wait, wait. I'm computing. You're welcome. I will be in my trailer. That is an algorithm, like that. 
If the world changes at all while it's computing, then it's off in its trailer, like that. Computational processes are very different. The whole point of a computational process, like an operating system or a web server, is to never end, to always be there to take the next request, to take the next request, to take the next request. It's virtually the opposite of an algorithm. In finite scalability, the focus allegedly is correctness. If your algorithm is incorrect, you cannot even talk about it. It doesn't make sense. You can't call it a sorting algorithm. If it doesn't guarantee the sort, you've just made some incoherent goo. Well, that's nice as far as it goes. But what is correctness? In tiny, tiny little finite examples, addition, you can define correctness pretty well. But in almost all actual computations that are used by actual systems today, what is the correct answer to a Google search for Dave Ackley? Except for the fact that I come up first, I don't know. And nobody knows. Why? Because new inputs are coming in continuously. The world does not stop to let the Google search come to equilibrium. There is no sense of correctness. So the number one requirement that they will teach us when we take computer science classes, your code must be correct, doesn't even apply in the real world most of the time. Not for systems, not for things that actually do stuff for people. Once you've gotten your code correct, you try to make it as efficient as possible. And we praise you: oh, oh, you're much more efficient than him. Ah, ah, like that. The more efficient you can make it, the better you are as a person. That's the way computer science works. But it turns out, when you look into this, and I can't go through it in this talk, but you can find it in the little paper that I had as one of our options, that every time you make an algorithm more efficient, you also make it more fragile, so that if something does go wrong, it goes worse wrong. Why? 
Because the way you made it efficient was by making the smallest observation of the data that you could, and then making the biggest change on the basis of that. That's efficient. As long as you have an ironclad guarantee that everything's perfect, it makes perfect sense. But if someone gets at your hardware with a heat gun, or maybe there's a bug in some other piece of the code that messes with your code, it all goes out the window. When it fails, it fails catastrophically. And computer science either says, hardware problem, or it says, I didn't write that library. Ah, ah, like that. And so on. Right on down the list. Centralized control versus distributed control. The dynamics are deterministic versus stochastic. And so on. And then all the way down to the master of the universe, that attitude that got me through, growing up a little, in the basement of Miller Hall: master of the universe versus member of the team. You don't even know the big picture when you're in an indefinitely scalable computing environment. But you have knowledge about how to make things better. I don't know what's the big picture here. But I know there's water over there and there's a sandbag over there. I'm going to pick up the sandbag and put it down over there without waiting for anyone to tell me. Okay. That's the way indefinitely scalable computing works. Member of the team versus master of the universe. All right. Last thing, and then we'll do the rest of the time with demos. So it's like a Zen koan: what does a computer without a program do? Whatever a programmable machine does before we actually program it, before we interact with it in any way, we're going to call that behavior its neutral dynamics. Now, for a typical computer, before it's been programmed, it does nothing. It's just sitting in a single state waiting for input, waiting for the load-program button or the reset button or something to be pushed. Now increasingly, these days, that's not true. 
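Backing up for a moment to the efficiency-versus-fragility point, here is a toy illustration of my own (not from the paper mentioned): binary search makes the smallest observation per step, one probe, and the biggest move, discarding half the data, so a single smashed memory cell can make it deny that a perfectly intact value exists. The slow linear scan shrugs the same damage off.

```python
# Efficient: one probe per step, half the data discarded per step.
def binary_search(xs, target):
    lo, hi = 0, len(xs) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if xs[mid] == target:
            return mid
        elif xs[mid] < target:
            lo = mid + 1   # trust one probe; throw away the whole left half
        else:
            hi = mid - 1   # or the whole right half
    return None

# Inefficient: looks at everything, trusts nothing about the layout.
def linear_scan(xs, target):
    for i, x in enumerate(xs):
        if x == target:
            return i
    return None

data = list(range(100))
data[50] = 999                   # a "heat gun" smashes one unrelated cell

print(binary_search(data, 51))   # → None: catastrophic, though 51 is right there
print(linear_scan(data, 51))     # → 51: degraded hardware, graceful answer
```

The efficient version is strictly better only while the ironclad guarantee (here, sortedness) actually holds.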
These days, of course, when you turn on a new computer, what happens? It asks for a credit card number. Because the neutral dynamics of computers have changed since they were just a simple little wait-for-program-load. But it's worth thinking about that. What is the behavior of the system before we make it bend to our will? And once we start thinking about computation in the large, general, philosophical sense, it matters much, much more. If, for example, my phone has died and I actually want to interact with another human and say, excuse me, could you tell me the time, and if they don't run screaming, maybe they will tell me? Well, that's an example of a computation. But the thing that was running the program, the person, the neutral dynamics, they were not just sitting there doing nothing, waiting for us to come and ask the time. They were standing at the corner, they were reading their phone, they were waiting for the bus, whatever they were doing. And whatever they were doing affected what we could do with them. If they had their headphones in, we'd have to do a different kind of computation. Could you tell me the time? They run screaming because we touched them. So when it comes to what it could mean to do bottom-up engineering, what it really is going to mean is we want to shape the neutral dynamics of hardware, little pieces of hardware, to automatically be doing useful stuff, like cleaning their desk and putting their head down or whatever, before there's any program at all. Why? Because they have been told, they have been trained, they have been taught, they have been given programming, these individual teeny little things, saying, there are things that should be true of your local environment. You should be in line with the guy on your left and the guy on your right, whatever. And they're just going to go ahead and do that in the complete absence of any input from on high, from top down. Okay? 
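That "be in line with the guy on your left and the guy on your right" rule can be sketched as a tiny neutral dynamics of my own invention (a toy, not the Movable Feast Machine): each agent, with no top-down program and no global controller, occasionally wakes up and nudges its own value toward the average of its two neighbors. Order emerges anyway.

```python
import random

def step(values):
    """One random agent wakes up and maintains its local invariant:
    get in line with the neighbor on the left and on the right."""
    i = random.randrange(1, len(values) - 1)       # the two endpoints stay anchored
    values[i] = (values[i - 1] + values[i + 1]) / 2.0

random.seed(1)
# Eight agents in random disarray between two fixed anchors at 0 and 10.
line = [0.0] + [random.uniform(-10, 10) for _ in range(8)] + [10.0]

for _ in range(5000):   # asynchronous, unsupervised local updates
    step(line)

# The interior settles onto the straight line between the anchors,
# even though nobody ever saw the big picture.
print([round(v, 2) for v in line])
```

Each agent only ever looked at its immediate neighbors, yet the global configuration it converges to is the exact linear interpolation between the anchors.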
For robustness, computational mechanisms will be taking care of themselves before they're told what to do, and the way we are going to build really robust systems, the way we are going to build systems that you might not be insane to let drive your car, is by composing these neutral dynamics on top of each other, to make things that inherently care about safety, because that's their first concern before they even listen to see what's going on. And that is so different from traditional computation, where the RAM does nothing. The RAM can never change a bit on its own. It's not allowed. Here, by contrast, the combined RAM-and-processing space is continually saying: here's what I'm supposed to be, here's what I am. Well, they're different. And we're going to engineer with all of that. An analogy that was mentioned to me one time is the change in making movies from the studio model to the Hollywood model. In the good old days, the studio owned everything, from the cameras to the sets to the actors, and they dictated what happened. But apparently they were about to get broken up by the government for being a monopoly. So they switched, and they fired everybody, and now they just hire them back for one movie at a time. And that's now come to be called the Hollywood model. You need a director, you hire a director; he's just on for one thing. You need a Steadicam operator; well, I know a guy who's great with a Steadicam, and he's got availability in the next six weeks. And you put them all together, and then you let them go apart. But each individual person is an individual agent that takes care of themselves, rather than waiting for the top-down input. All right. So I think I said most of this. What is bottom-up engineering? It's going to be shaping the neutral dynamics like this. Developing active machinery.
That's the key: things are going to be changing state, talking with neighbors, making decisions, and updating themselves without waiting to know what the official top-down program is. All right. So let's do a few examples. What we are looking at here is the simulator for the Movable Feast Machine, which is a research indefinitely scalable architecture that we've been developing for the last several years. Each of these gray boxes that you see here is meant to be simulating an individual little tile: a separate piece of hardware that has communication with its neighbors but is otherwise completely independent. And instead of just two by two, we could have as many as you want; it's indefinitely scalable, okay? But this is running on my laptop, so two by two is kind of enough to slow things down. We have a table of elements. These are little guys, little agents, little whatevers that I have invented to study what they do when you give them rules for what they should do by themselves and then let them loose to see the emergent behavior. I don't like using that phrase, but it's kind of what it is. You make up the rules for the individual guys, but that doesn't tell you exactly what's going to happen when they start interacting with each other, okay? So, for example, we can pick W, which stands for Wall. Wall does nothing. It just sits there. It acts like a dot. Now, this is important: this is sort of like a paint program, but in fact each of these spots is alive. Each of these spots is getting events. If that guy wanted to do something, he could. So let's take an example: Ray. Ray is a guy that, when he gets a chance to go, makes a copy of himself to his west. Okay, so this is a west-going line. Might be useful for something. Now, it's not just a west-going line; it's really serious about going west. If I come in and erase a chunk of it, well, my Wall gets nuked, but the Ray is fine. It hasn't given up wanting to go west.
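Though the real Movable Feast Machine elements are hardware-level rules, the Ray behavior just described can be sketched in a few lines of Python. Everything here (the names, the one-dimensional strip, the event loop) is my illustration, not the MFM implementation: a Ray atom, whenever it happens to receive an event, copies itself one site to the west.

```python
import random

EMPTY, RAY = 0, 1

def ray_event(grid, x):
    """Event rule for a Ray-like element: copy myself one site west.
    (A sketch of the behavior described in the talk, not the MFM code.)"""
    if grid[x] == RAY and x > 0:
        grid[x - 1] = RAY  # overwrite whatever is west of us

def run(grid, events):
    """Fire random events; only sites that hold a Ray atom do anything."""
    for _ in range(events):
        ray_event(grid, random.randrange(len(grid)))
    return grid

# One Ray at the east end eventually paints the whole strip west-going:
g = run([EMPTY] * 8 + [RAY], 1000)

# "Erase a chunk" of it: the surviving Rays regrow it on later events,
# because nothing ever told them to stop wanting to go west.
g[3:6] = [EMPTY] * 3
g = run(g, 1000)
```

With a thousand random events on a nine-site strip, the chance the line has not fully regrown is negligible, which is the point: the repair is not a special case, it is just the neutral dynamics running.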
It still wants to go west. The only reason nothing was happening was that it was already filled in. Now, if I start chopping it from the east, it's got no response, right? Because there's nobody who says "go east" in a west-going Ray. It's like a Dr. Seuss book. So that's another example. We could take a slightly tamer version of Ray: a line that wants to go a certain distance and stop. Now, instead of trying to fill the universe, it's a finite line segment. And we can erase chunks of that, and it'll actually fix itself either way. See how different this is from our old dead stuff, where you put it down once and then it just stays there, because it's not allowed to do anything until I, the CPU, come back and make a change. This stuff is empowered to do whatever it's programmed to do. We can take the idea of a line segment and make four of them and build a box. Now we're actually treading towards something that could even be useful one day: we have a division between inside and outside. We could have different properties, different stuff going on, inside and outside. Compare that to what I call the dead box, which is the history of humanity up through the Industrial Revolution. That's where we developed machinery able to stamp out boxes automatically, which is really great. But of course they're just dead boxes, so if they start to fall apart, they just keep falling apart. Now we have a dead box. By comparison, of course, the living box fixes itself. So let's do a quick one here: we'll get a dead box and we'll get a living box. Time's getting really late, but I want to do one thing first, and then we'll come back and look at this box. This guy here, he's hard to see. That little dot? Wait, I'll just keep zooming in. There we go. This guy, his name is Dreg, and it stands for Dynamic Regulator. And the rules for Dreg are very simple. So suppose I use my little wand of life and I say, okay, you get an event.
All right, what did it do? It moved to an adjacent square. So the fundamental rule, the way Dreg works, is: when it's its turn to go, it picks north, south, east, or west at random and looks to see what's there. If the square is empty, it throws a random number and maybe creates a resource there, an atom called Res. And if it doesn't create a Res, it just switches with the empty square, which means it moves into it, and that's what we've seen so far. We click it, it moves into an empty square, right? But if we let it run for a while... oh, there we go. That brown thing there, which should be a little more visible, is an atom of Res. Here's our Dreg, and so forth. So somewhere along the way it found an empty square, flipped a coin, and said, make a Res, like that. And when it sees an empty square, with a very small probability, instead of making another Res it makes another Dreg. And then that Dreg is independent, and it goes around and follows the rules itself, at the same time, in parallel with the one we had. So that's half the Dreg story. The other half of the Dreg story is: when it picks one of its neighbors, north, south, east, or west, and examines it, if it's not empty, it throws another random number and maybe erases whatever is there. And in fact it's more likely to erase something in an occupied square than to create something in an empty square: two to one. And you can actually mess with the probabilities down here with these sliders. So if we stand back and say, well, what's going to happen? Well, let's do it this way. I'll get rid of these guys; I don't want them anymore. Suppose we now take a single Dreg and put it inside the dead box, and a single Dreg and put it inside the live box, and let it go and see what happens. Well, they bounce around. You might be able to see, uh oh, they got out of the box. How did they do that? Now they're creating... well, I'm going to make another one. I'm a vengeful guy. Yeah, here we go.
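As a rough sketch, the Dreg rule just described looks something like the following. This is my own Python illustration; the probabilities are made up except for the two-to-one erase-versus-create ratio mentioned above, and the grid driver is just enough to watch one Dreg work.

```python
import random

EMPTY, DREG, RES = 0, 1, 2

P_CREATE = 0.1            # chance to create in an empty neighbor (illustrative)
P_ERASE = 2 * P_CREATE    # erasing an occupied neighbor is twice as likely
P_DREG = 0.02             # rarely, the created atom is another Dreg, not a Res

def dreg_event(grid, x, y, rng):
    """One event for the Dreg at (x, y): inspect one random N/S/E/W neighbor."""
    h, w = len(grid), len(grid[0])
    dx, dy = rng.choice([(0, 1), (0, -1), (1, 0), (-1, 0)])
    nx, ny = x + dx, y + dy
    if not (0 <= nx < w and 0 <= ny < h):
        return
    if grid[ny][nx] == EMPTY:
        if rng.random() < P_CREATE:
            # throw the dice: usually a Res, very occasionally a new Dreg
            grid[ny][nx] = DREG if rng.random() < P_DREG else RES
        else:
            # otherwise just swap with the empty square, i.e. diffuse
            grid[ny][nx], grid[y][x] = grid[y][x], EMPTY
    elif rng.random() < P_ERASE:
        grid[ny][nx] = EMPTY   # erase whatever was there: Res, wall, anything

def run(grid, events, rng):
    """Deliver random events; only Dregs have behavior in this sketch."""
    h, w = len(grid), len(grid[0])
    made_res = False
    for _ in range(events):
        x, y = rng.randrange(w), rng.randrange(h)
        if grid[y][x] == DREG:
            dreg_event(grid, x, y, rng)
        made_res = made_res or any(RES in row for row in grid)
    return made_res

grid = [[EMPTY] * 5 for _ in range(5)]
grid[2][2] = DREG
made_res = run(grid, 5000, random.Random(1))
```

Start one Dreg in an empty world and, sooner or later, Res atoms appear, diffuse, and get erased again: a churning supply of raw material plus background damage, which is exactly the pair of pressures the rest of the talk builds on.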
Over time, what happens is that the dead box just gets eaten up, because the Dreg happens to be sitting next to it. It happens to consider it, its number comes up, it erases it. That's just the way it works. The live box? Conceivably that could happen too, but as soon as it gets another chance, the neighbors of the live box will come in and regrow it. It's much, much tougher stuff. It can live in a world that is not completely friendly, and that's what I love about Dreg. Dreg just goes around and erases stuff at random. If we imagined our computers having these things running inside them, programmers' heads would just explode: "Well, no, no, no, I initialized that! It was zero, and now it's gone." That sort of thing. That's what we're talking about. So if you design a program so that it can actually work when there's Dreg in the system, you have this base level of robustness, this base level of handling your own assumptions, the things that you said should be true, being violated, before you even start. And why is it great that Dreg does the erasing? Because Dreg also makes Res. It makes a base atom which is designed to be used to build more complex structures. You grab a Res, you make it into something else. We're almost out of time, but maybe we have a chance. All right, so I'll finish with this. This is hard to understand at first. All of that brown stuff there is Res, Res that I just scribbled around. Here's a little pocket of Dreg that I just scribbled around. And then down... where is it? There should be a red guy someplace. Oh, there it is. This guy is a sorter. And what he does is look to his right to see if there's a data item; the data items look blue. If there is one, he moves it from right to left, and he also moves it up or down a little bit, depending on whether he thinks it's little, in which case he moves it up, or big, in which case he moves it down. And furthermore, before he does that, he looks around, and if he sees a Res, he converts it into more sorters.
So he dynamically assembles a sorting horde from the Res. So let's start this up and see what happens. There's a single guy over here; this guy emits random data items. He actually builds a whole little column, so that we have a bunch of emitters all going at once. And there's a pair of guys over on the other side that consume whatever data gets to them. So here we go. You see the Res bouncing around. The emitters have assembled themselves. They're supplying data into the grid. There's hardly any Res left, because it has all been turned into sorters. And there are still Dregs there. They're very hard to see because they're rare, and they're creating more Res, but the Res is getting instantly eaten by the sorters to build the sorting network. And over time (this is still sort of warming up; there's a lot of blue clustered around the thing) it will actually settle down and start pulling the data through fairly well. And we can change the color scheme, if I can find it. There it is. Now what we're doing is coloring each sorter, instead of red, by what it thinks the threshold between big and small is. So if you look over on the right, you see this big hash of all different colors, but as you move right to left, the colors gradually get more and more laminar, because this thing is sorting the data as it moves through. And by the time it gets to the consumers on the left edge, it's actually got the data sorted pretty well. Does it have it sorted perfectly, like quicksort, like bubble sort? No. That's not even well defined. The emitters are emitting data at every moment; how do you even know what the maximum is? You were just ready to send that 750,000 guy out as the maximum, and somebody at the beginning pops in an 800,000. This is not even admissible to a clean notion of correctness. So what? It's great for prioritization: signals come in, important ones go one way, less important ones go the other way.
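The sorter rule is also simple enough to sketch. Here is my own simplified Python version of what was just described (the real sorters live in a large 2-D grid and also recruit Res into new sorters, which I omit, and the adaptive-threshold detail is my assumption from the color-scheme description): each sorter keeps a threshold guess, moves a datum found to its east over to its west, nudging it a row up if it looks small and a row down if it looks big, and adopts the moved value as its new threshold.

```python
EMPTY = None

class Sorter:
    def __init__(self, threshold=500_000):
        self.threshold = threshold  # current guess at the small/big boundary

def sorter_event(grid, x, y):
    """One event for the sorter at column x, row y (a sketch, not MFM code):
    look one site east; if a data value is there, move it one site west,
    a row up if it is below our threshold, a row down if it is above."""
    h, w = len(grid), len(grid[0])
    me = grid[y][x]
    if not isinstance(me, Sorter) or x - 1 < 0 or x + 1 >= w:
        return
    datum = grid[y][x + 1]
    if not isinstance(datum, int):
        return
    dy = -1 if datum < me.threshold else 1   # small goes up, big goes down
    ty = min(max(y + dy, 0), h - 1)
    tx = x - 1
    if grid[ty][tx] is not EMPTY:            # preferred spot taken?
        ty = y                               # fall back to straight west
    if grid[ty][tx] is EMPTY:
        grid[y][x + 1] = EMPTY               # pick the datum up...
        grid[ty][tx] = datum                 # ...and drop it to the west
        me.threshold = datum                 # adapt: remember what we moved

# A 3x3 patch: one sorter in the middle, one "small" datum to its east.
g = [[EMPTY] * 3 for _ in range(3)]
g[1][1] = Sorter()
g[1][2] = 123
sorter_event(g, 1, 1)
```

After the event the datum sits at the north-west site and the sorter's threshold is now 123, so the next datum will be judged against a value it has actually seen. Tile a field with these and you get the laminar color bands from the demo.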
And this thing is as robust as the day is long. We can blow holes in it, and now it's all gummed up, it's got stuff all collapsed there, and it just takes a little while to heal. Why does it heal? Because there are still Dregs in there that are still generating Res, which is still diffusing around and getting co-opted by the sorters. The sorters are moving around. The machine rebuilds itself. The only way we can really mess this up is to blow it away so badly that we completely destroy the emitters or the consumers down to the last guy; then they have no way to come back. Okay. So I leave this for you as a tiny little sample of what best effort computing could look like. Is it absolutely correct? No. Correctness does not even make sense in the limit. But it's robust as hell. Thank you so much for listening. All right, we have a couple of minutes for questions, for those who can stay. Anybody? What do you think? Yeah: how would you implement this? Okay, great question. Something like this. Well, here, in this one we only have two tiles. The idea is that each of the tiles is going to be running with small memories and have communications to all the neighboring tiles. And essentially the border of each tile is cache memory. Whenever they have an event that lands on the cache, they have to coordinate with each other and take a lock, saying: okay, you can have control of that little bit of memory, I will go ahead and move my work elsewhere, that's fine, until I get the cache update from you to say what happened. And then we release the lock and go about our business. So essentially that's what we're doing.
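The handshake just described can be caricatured in a few lines. This is only a toy model of the idea (the class, names, and structure are mine, not the real MFM tile protocol): each pair of adjacent tiles shares one lock per edge, and an event that lands in the shared cache region holds that lock while it applies its change and pushes the matching cache update to the neighbor.

```python
import threading

class Tile:
    """Toy tile: owns a dictionary of sites and shares one lock per edge."""
    def __init__(self, name):
        self.name = name
        self.sites = {}        # (x, y) -> atom at that site
        self.edge_locks = {}   # neighbor name -> Lock shared with that tile

    def connect(self, other):
        lock = threading.Lock()            # one lock for this shared edge
        self.edge_locks[other.name] = lock
        other.edge_locks[self.name] = lock

    def border_event(self, neighbor, pos, atom):
        """An event whose window overlaps the cache region with `neighbor`:
        take the edge lock, change the site, send the cache update so the
        neighbor's mirror copy agrees, then release and go about business."""
        with self.edge_locks[neighbor.name]:
            self.sites[pos] = atom
            neighbor.sites[pos] = atom     # the "cache update" message

a, b = Tile("a"), Tile("b")
a.connect(b)
a.border_event(b, (0, 3), "Res")   # both tiles now agree on site (0, 3)
```

The point of the lock is only that two tiles never run overlapping events on the same border region at once; away from the borders each tile proceeds entirely on its own, which is what keeps the design indefinitely scalable.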
And the slogan is: there's really nothing wrong with the machine that can't be fixed by the individual components of the system taking care of themselves. So that's what we're doing.

Is it fair to say that you're motivated in part by the way modularity in the brain might work, the various types of computation that living organisms do, things like vision, object recognition, and so forth, broken down into all the various constituent computations that would be going on? So the question is about the brain as a modular system, with all these things going on at every scale. And I'd say: number one, modularity, yes, absolutely, although one has to be a little careful, because sometimes people think modularity implies hierarchical decomposition, which it doesn't need to. So, modularity yes. And if we can get this kind of engineering to take off, we're going to get down to the kinds of scales that we see going on in living systems.

There was also a question about the possibility of having quantum mechanics involved, for inherent non-determinism. I get all kinds of questions about quantum mechanics, and that's a bit of a separate issue, but certainly that door is open; that's one thing that in fact is possible.

And a question about security: if someone wanted to hack a system like this, wouldn't it be possible for them, knowing what the architecture is and what's in the modules, to simulate the other modules and so forth? Well, part of the answer is that there are no guarantees here. That's part of the point: this is not going to guarantee security either.
There's going to be everything going on, all of those attacks happening in these systems, and we're just going to have to engineer against that, just like evolution has done over the years. One more answer, for the researchers: these tiles that we're building are reprogrammable. They have a function that defines the laws of physics for the tile. There are the real laws of physics, of course, but for these purposes we allow the tile's physics to be reprogrammed from time to time, so that we can explore how it all goes. And as a result, if you can attack the channel over which new physics gets distributed from time to time, you can take over the system. The goal would be that, in a fully developed indefinitely scalable system, we would use our smart science, engineering, and math brains to figure out a table of elements, these 128, these whatever, that are all we need, and then we get rid of the programmability and bake it into dedicated chips: that's all they can do, and that's when security goes up and it's time to shine. Then, if you want to attack it, you have to go after that distribution cycle itself, and meanwhile that's the level where you can manage your email and stuff.

Yes? So, I really love the project, and it's really cool. I'm really sad that Dan isn't here, because this is basically taking his "Real Patterns" paper, where he talks about using Conway's Life to run a Turing machine; I see this as somehow a next step of that idea. Well, yeah, except for the fact that Conway's Life... I'm sorry.
So the comment was, well, the question really is: I see how it's scalable, but I'm worried about efficiency. It seems like you'll need to make the system really big before it can do anything useful, and that doesn't seem to be the most efficient way of computing, for the same reason we don't use Conway's Life as a method for calculating. So the intuition is: this seems like an extremely inefficient way to compute, and if you imagine taking Conway's Game of Life, which is a famous cellular automaton model, and using that to compute, it would require an incredibly giant grid to do the stupidest possible thing. And the answer is: your intuition is really wrong, for the following reason. You think you'd have to have a gigantic grid. Well, you know, what do you think a monitor is? 1920 by 1080: that's two million elements right there on every damn computer monitor, a decent one anyway. And how much more would it take to put a small amount of logic behind each of them? A factor of 4, or 10, or 100; it depends how much logic you put there. But the computer engineers can certainly make these guys automatically spin down to low-energy states when they haven't had any transitions for quite a while, while still being able to make transitions if something comes through from the neighborhood, and so forth. So the point is to decide what we need, and then let the computer engineers go crazy with the efficiencies. And then, of course, Conway's Game of Life is really evil, because it's deterministic, and therefore it's fragile, and therefore it's got no interest at all. So it's not a small step from Conway's Game of Life to this. In the back?
Yeah, this is a fair question, for sure. Right now we're dealing with a single fixed neighborhood: a Manhattan distance of 4, which is 41 sites, and that's it. And the only thing that makes me hesitate, in thinking about other topologies, is that in order to count as indefinitely scalable, it must be realizable at indefinite scale. A lot of people say, well, how about a torus? But wraparound runs into problems here: it's not realizable at indefinite scale. You can embed it in physical space, you can do it, but you have to fold back on yourself in weird little ways and so forth, and it makes for the kind of non-local connections that aren't appropriate.

The next question was about explanations. There's a feeling that traditional computing, the stuff we did then, is useful because, in principle, you can logically trace what happened all the way back through the neighborhoods and the program; you can say why something happened, and in this system you can't say that, but you still want to know what's going on. Number one: in fact you cannot do that tracing all the way back from anywhere unless you keep all of the intermediate results and don't discard any information in the forward computation. There are systems that do that, but in practice it's totally off the table. And the other problem is: what if we get an explanation at the wrong level of detail? Why would you want that?
It is not going to have an explanation that you can step through anyway. And the second part of the answer is that, when we give up on logical inference, what we're going to gain is statistical inference. There are so many sources of evidence that the odds of a guy being more than six sigma off are 10 to the minus 10, or whatever it is, and it will explain that to you. And if something goes wrong beyond that, then we have to say a fault got through, and we deal with it, and the days go on.