OK, well, the day has finally come. I confess, I feel a little bit like Samwise Gamgee standing on the dock at the Grey Havens, watching the ship that will carry Frodo into the West. But saying that, I probably presume too much. I'm not Sam to today's Frodo; Merry, or Pippin at best. But in any case, it's my great privilege to introduce Dave Ackley, today's speaker. As all of you know, Dave is retiring after a long and distinguished career at UNM. Over the years, he has made enormous contributions to the computer science department in teaching, service, and scholarship. He's known to his students as a demanding yet devoted and extraordinary teacher. He's known by his colleagues as principled yet unorthodox, a thinker with a highly developed sense of humor. Neither our classrooms nor our faculty meetings are going to be the same without him. To me, he's the guy who's occupied the office next to mine for the last 20 years. And if I had to describe the one quality I admire most about Dave, it would be his generosity. His door is always open. Those of you who have not taken advantage of that resource over the years, well, it's your loss. Because Dave is a gifted and generous listener. He is generous enough to listen and gifted enough to understand, to repeat back what has been said in his own words, to provide constructive criticism no matter how exhausting for either of us. And it frequently is. But there is no doubt my intellectual life has been made immeasurably richer by knowing Dave. So thank you. Let's talk science. Science would be impossible without making assumptions. Assumptions give us traction. Without assumptions, we cannot make progress. Some assumptions are explicit and known to those who make them. Euclid's parallel postulate, for example. Others are implicit and unrecognized. Some of the greatest breakthroughs in science happen when assumptions, whether they're the known and seemingly indispensable kind, or the previously unrecognized kind, are abandoned. Abandoning the parallel postulate led to non-Euclidean geometry. Abandoning the idea of an absolute frame of reference led to the theory of relativity. What assumptions are we making right now about the nature of computation? Assumptions that have given us traction and formed the basis for our technology, but have outlived their usefulness and are now holding us back. I don't think you understand how truly radical Dave's thinking is. When we have a department chair who builds computers that play tic-tac-toe out of DNA, well, that's nothing compared to what Dave Ackley's going to talk about today. So what is the assumption we've been making? What's the one that's holding us back? In my own words, and Dave can dispute this, it's that a computer can be contained in a box of a priori known size. I mean, that hardly seems controversial. Could one imagine a computer that couldn't be? Well, he can. And he's going to tell us about that computer, how it can be programmed, and what advantages it has over more conventional computers. He will describe, in short order, why such indefinitely scalable computers are the key to a post-von-Neumann future. Over the years, Dave, you've generously listened to us. Now it's our turn to listen to you. So thanks.

Thanks. And it's always good to jump on the chair. It's always a wise move. I want to talk about living computation. That's the way that I frame the work that I've been doing for most of my career.
But I want to do it sort of in the way that Lance said, by starting with a loss, starting with what we lose in this process. So the first part of this talk is a bit of a funeral, a requiem for hardware determinism. And let me just do that. Hardware determinism for computational growth is dead. I come to you today to bring word of its demise. It actually died some time ago, but news of its passing is not widely known. I come to you today to remember the life of determinism, to celebrate its virtues, to understand its sins, and to lay shame upon its abusers, myself included, who expected far too much from any one idea. And in doing so, step by understandable step, innovation by marketable innovation, consequence by unexpected consequence, used hardware determinism for computational growth to build a world full of amazing and crucial technological marvels, but a world nonetheless in which systemic fragility is expanding, security and privacy are crashing, and individual freedom, all the way down to the literal mathematical degrees of freedom of individual agents within a system, is declining globally, as never before, because we asked determinism to scale in a way that it could not do. And the computer architecture design principles resting on top of hardware determinism don't scale either. I come to you today to seek your help, to dethrone a zombie tyrant, and to bury a friend, and to look toward the dawn of an era of truly large-scale computing. Indefinitely scalable computing, where robustness comes first and determinism is deployed only in the small, which in the end was all it ever bought us. Hardware determinism for computational growth is dead. Let us let it rest in peace. I first got to know determinism when I was a kid. It was always clear to me, before almost anything else, that this was my mission in life. I needed to make something do something by itself, and that was inherently cool. And it really didn't matter what it was. It was always the little cars that had two batteries in them; I would try to put a third battery in. If it didn't have any batteries at all, I would glue little cardboard sails on it and blow it down the hall with a box fan. That was actually a lot of fun. The problem is, I could never get it to do the next step. I could never get it to go so that it would tip over the next thing and do the next thing. It would always miss. It wasn't accurate enough. That all changed in 1971, here at Lord Melville High School on Long Island, to be specific, right here. Ah. At least, at the time, that was where this was. An IBM 1130 computer. Oh my god. It was so easy to make it do something by itself. I couldn't believe it. Let me tell you how unbelievably easy it was. All you had to do was take some of these pieces of paper. There would be, in this machine, a totally cool thing that would punch holes in the pieces of paper, take them in up here, go across here, and collect them over here. And when you had typed in everything you needed to type in, you'd pick everything up out of here. You took it over to this thing. This is not the original; it's an assemblage of pictures I just got off the net. You put the cards in this thing, and it would read them in, and it would send them over to this thing, and it would do it. Yeah, it would just do it. Exactly the same way, every time. And typically, you wanted it to say something, so you printed on the printer. A hundred primes, don't you think? Typically, every once in a while, I would make a mistake.
Make a mistake. But it didn't matter, because we had a line editor. You just went back to this thing. You found the card, punched up a new card, put it in, replaced the whole line. It was so incredible. It was so easy. But you know, even at the time, even in high school, even in 1971, there was already a two-class structure emerging. My friends and I, we were in the computer math class. Whoa, we were cool. We got to use these machines, the 029 keypunches. There was another class, the business data processing students, that was competing for the same machine. But they had to use these things. We got the Star Trek. They got the straight-out-of-the-'50s. But the point is, there was only one of these. So there was inherent competition for valuable computing time. Limited computing time. And they kicked us out at 3 o'clock every day. Crazy, ridiculous. It sat there, turned off, from 3 PM until 8 AM. It could have been doing our programs. Well, so we thought up a little plan. Here's the computer room. We just figured, you know, at quarter of three, something like that, we're working away. They're about to kick us out. But we'd, real casual-like, unlock one of the windows. And we'd lock the door like we were supposed to. We'd go wait for the buses to leave. And then we would make our entrance. We'd slide the window open, we'd go in, and we would hack our brains out until dinner time. It was great. You want to write a program to calculate 87 to the 87th power? We did that. Cool. You want to write a program to make little random haikus where everything about them was wrong? Yeah, we did that, too. In FORTRAN. But of course, you know how these kinds of stories end. It wasn't that long before we got caught. And of course, it was Coach who caught us. And we got in trouble, and our computer math teacher was so disappointed in us, and so on. But the point is that the power, the feeling of power, of deterministic execution, a machine that will do exactly what you want over and over and over again, is incredibly potent. As I say to all of my students in 351, you are a master of the universe. Granted, it's a small universe. But it's yours. You can do anything you want with it. It's incredible. In a way, it's too incredible. And that's really the story of why I'm here and what my research career in computer science has been about. Let's talk a little bit about hardware determinism itself, and then about why I think it's dead. And if it's not dead, we need help to kill it. And we do need help to kill it. There's a reason for that. The reason that we are the way we are is that it makes sense. But we need to work on it. And then I want to save enough time to actually do some demos of the latest in the alternative that I'm working on. The important alternative is this idea of best effort. Give up on hardware determinism. Hardware no longer guarantees to do the same thing twice. Instead, it says, oh, man, I'll really try to do what you said. But of course, you know, hey, things can hang up. Hardware reserves the right to fail. Best effort means it will mostly be right. Mostly be perfect. But if it fails, it may fail arbitrarily. You cannot put a probability distribution over the possible failures. Anything might go wrong. That's best effort, defined. Do you want to program for a machine that's like that? Hey, my program didn't work. So did you try it again? Yeah, I tried three times. It worked once. Well, who knows? OK.
So the idea of hardware determinism goes all the way back to the beginning of digital computing, which von Neumann gets most of the credit for, although plenty of other people were involved in that particular effort, and Eckert and Mauchly did an awful lot of the engineering design that got written up by von Neumann, who then got his name stuck on it, which just goes to show that whatever you think your education is about, what it really is about is learning how to write. If you write, you win. If you say it better, you win. It doesn't matter how good your idea is if you say it in a way that nobody gets. So the idea is that hardware takes physics. Physics is noisy. It's stochastic. It's quantum. It's all just, bleh. And hardware has to turn that into absolutely perfect logic. Perfectly clean inference from one step to the next. It's guaranteed. Perfect. And if anything ever goes wrong, if it's unable to do its mission, it swears to crash the entire machine and end the insanity immediately. That's hardware determinism. Software, for its part, says, OK, you're going to give me automated logic. I can now make transitions from one state to the next. The previous state is completely specified, and the resulting state is determined only by my program. That's the bargain you get as a programmer with hardware determinism. Hardware provides reliability. Software provides desirability. Hardware without software is a doorstop, a space heater. Hardware with software has to make enough money to pay for the whole hardware and software together. And that's the computer revolution. It worked great. One guy from the very beginning said that this was not actually going to work. And we all know, hopefully, that was von Neumann as well. And he pointed out that this whole idea of doing perfect transitions with 100% reliability, billions and billions and billions in a row, eventually has to stop. Eventually we have to say, well, you know, that's a whole lot of state transitions. Do you really need every single one of them to be perfect? Software says, of course, that's what you signed up for. If anything goes wrong, it's a hardware problem. And we're going to have to allow the operations of logic, the fundamental primitives of hardware, to fail with some small probability. Von Neumann said this in the late 40s. He had an alternative about how to do it instead. His ideas for alternatives went nowhere. And his von Neumann machine with hardware-deterministic execution went everywhere for the next 70 years, and it's still going. Why is it that this hardware determinism is so hard to give up? I've been thinking about this for maybe 15 years now, especially as I do more and more talks. And nobody tells me where I'm wrong. They just sort of clap at the end and go away. Which is okay, I get it. It's on me. I'm saying we need to do something really different. Extraordinary claims demand extraordinary evidence. Sure, I'll keep working on it. But I now understand that this idea of hardware determinism allows a mindset for the software engineer, for the programmer, for the computer scientist, of focusing on correctness and efficiency only. And once you do that, a whole lot of other things sort of follow for free, and it creates this mutually interlocking set of design assumptions and ways of thinking that all reinforce each other. The first one is, well, of course, the most important thing is correctness, which it is, right? I mean, is it okay to write a program that's got bugs? What does it do?
Well, I don't know, it does something. From the theoretical point of view, from a mathematical point of view, a program with a bug is not a program that performs the task at all. The whole idea of being a sorting algorithm is the numbers have to be sorted when the program stops. Correctness has nothing to say about, well, you know, I did pretty good. Come on, look at that. Look at all those numbers, they're in the right place. Ah, maybe there's a few that aren't. Correctness is a boolean value. It's strict correctness, all or nothing, which gives you the idea that you're not done until you've got it exactly right, of course. Efficiency is a pure virtue. That one's really, really easy. Efficiency is always good. Does that make sense? Does anybody disbelieve that? Does anybody believe efficiency is not always a good thing? You, go on. What's the alternative? Robustness? Oh. Robustness. You must have seen one of my talks from the last 10 or 20 years. Sure. The alternative to efficiency is not necessarily waste. The alternative to efficiency may be robustness. The ability to still do the right thing, or still do almost the right thing, even when the system is under ridiculous stress. Remember that part before about hardware being allowed to fail. Scale independence rules: size doesn't matter. If it's an N log N algorithm, you can let N go wherever you want and everything will just fly. Nothing will ever change. N log N today, N log N in the future, or whatever it has to be. And if there are any faults, and this is the last item on the list, okay, all right, if there's a fault, really that's a hardware problem. I should go home and you should sue the hardware guy. But, you know, okay, if there is a problem, if it's important, you know, like in a self-driving car, the faults will be infrequent and well-behaved probabilistically. They'll be independent and identically distributed. Once you have these, and by the way, just because I put the slide up, those are all wrong, evil, bad beliefs. You're not allowed to just show the previous slide and say, look what Ackley says. This, in fact, is a conspiracy of mistaken and dangerous beliefs that are all bought by hardware determinism, that are all ultimately due to the fact that because reliability is a hardware problem, why would software do anything more than once? Makes no sense. The answer has to come out the same way. And if the answer doesn't come out the same way, that's a hardware problem. All right, that's my claim; you may disagree. This is, to me, it's like the Tom Lehrer joke about feeling like a Christian Scientist with appendicitis. The world we are living in. I guess that was inappropriate. I'm sorry. It wasn't my joke. Things aren't working so well. And what in particular is not working well at all is computer security. Computer security can be seen to involve at least three different things. There's network security, dealing with distributed denial of service attacks, all that kind of stuff. There's host border security, trying to keep the bad things out. And then there's host internal security, actually trying to find problems inside. And of those three, network, border, and host, really, we've made progress only on network security. And the reason for that, I suggest to you, is because the network is still spatial.
There are physical places where you can measure traffic, divert traffic, cut traffic, things that you cannot do once you're inside a host, because a host is based on CPU and RAM, random access memory. What does that mean? That means we've just destroyed space. Every location in the memory is equally distant from every other one. You can get to any location in, well, whatever your RAM or cache speed is, a few clocks, whatever it is. That's the entire design of our machines: central processing unit, random access memory. You don't even notice. Step one: you've destroyed space. You've said space doesn't matter. But in fact, if you're thinking about security, space is the key asset. The number one most important thing is to get away. All defense begins with space. Don't be there. But we just threw that out by making random access memory. Everything is next to everything. And that means, in the vast majority of cases, the first bug, oh, that's okay because we're gonna get rid of all the bugs, the first bug costs you the machine. The first bug that allows you to divert the flow of control once costs you the machine. That's the world we're living in. That's the world that we somehow think is reasonable. My dearest hope and sincere belief is that in the future, I don't know, 10 years, 20 years, people will look back at this and be agog. They will not believe it at all. You're saying everything went through one tiny little bit of space, this CPU chip and RAM? Everything, not just the low-level hardware configuration, but also your most private medical and financial information, and all the YouTube videos and the scum of the internet, all in one place? Same place, sure. And yet, here we are, living it. Here we are, believing it. It's crazy. This is the world. I used this slide when I was talking to the, well actually, I can't say who I was talking to. I was talking to some guys in D.C. And so this is the picture. You've got physics on the bottom. This is the sandwich of computing. I call it the glass sandwich. Guess which part is the glass? Yeah, it's the software. It's the stuff that we do in computer science. Electronic circuits: we don't think about this, but digital electronics is incredibly redundant. If you build an analog computer, you can easily send hundreds of different voltages down a wire with pretty good recoverability. Yeah, there would be a little noise, but you can send hundreds of different values on one wire. And yet, for digital computing, we send one bit. That's in fact how digital logic is made. That's in fact how the hardware determinism guarantee is created: by using massive redundancy. One bit per wire, and amplifiers everywhere to regenerate those bits before the chance of getting one wrong has gotten above ten to the minus nine, or whatever it is. So we pay for all this incredible redundancy, but we forget about that in computer science, because we start with the reliability and take it for granted. Instead of thinking, okay, hardware did some of the job. It had some redundancy in order to get some determinism, and we should do the same thing. We should have some redundancy to get some more reliability at the software levels, all the way up the stack. But we don't do it. We do software as efficiently as we possibly can, which means, in effect, that efficiency is just another word for removing redundancy. That's what it means. Efficiency means making the smallest observation of the world that you can and taking the biggest action you can on the basis of that.
That's efficient. The efficient sorting algorithms, that's what they do. They make one comparison and they move numbers between piles in the array that's being sorted. As long as you're absolutely guaranteed determinism, it's a smart move. But in any other circumstance, you are maximizing the consequences of failure. You're maximizing fragility. I cannot drive that home enough, because we're living in a world where there is no actual correctness, not seriously. Yeah, in, you know, intro programming classes, intro algorithms, you can talk about correctness. When's the last time you were confident that you got the correct answer from a Google search? It's not even a meaningful question. There is no correct answer. There's no specification. If there is a specification, it's "try to do something useful." That's the specification for all software, remember? Software provides desirability. That's its only purpose, to provide something useful. If being correct, or being a little more correct, helps with being useful, then great. But what's really important is the utility, not the correctness. All right, and then it's even worse, right? In data centers, at least, you have data center administrators who are trying to cover for the fact that software is crashing all the time by having duplicates and high availability, failovers, blah, blah, blah, blah. But for the rest of us schmoes, we got nothing. All right, so, we have this incredibly locked-in mindset of correctness and efficiency only, which permeates not only software, but it permeates computer architecture. The idea that hardware provides reliability is deep. How are we gonna get at it? Von Neumann said we should change; he thought it would happen when chips had 10,000 transistors. We're pushing a billion, and we're still doing the same old story. Hardware determinism, correct and efficient software. I've beaten this thing pretty hard, I'm gonna move on. But it is important to understand that some of the things that we are most proud of in computer science are in fact implicated in this failure, implicated in this inability to make secure systems, because we cannot find the last bug. That's not a serious proposition. And yet, we design everything so that the consequences of a bug are maximal. Strict correctness is among the most pernicious ideas, because by saying that correctness is a boolean, that means that all incorrect programs are equally bad. There's no partial credit. You forgot to put a period at the end of a sentence? Get a zero on your final. That's strict correctness. And it creates a research desert around strict correctness, because it makes it seem like there could be no useful distinctions to be made among incorrect outputs. And that has hindered the development of alternative ways of thinking, in deep, deep ways. There are little ways that we get around it, but most of those research directions don't gain much traction. All of these things: binary numbers, incredible. Computational universality, one machine, all programs, we think it's so great. And it is. But then there's the second step, where whenever you think of a new machine at a low level, made out of electromechanical relays, or ice cubes that melt, or laser beams, I don't know, whatever it is, the first thing we do is ask, how can we make it universal? When in fact, that's a terrible idea. Computational universality should come at the end, at the high level, when all of this low-level stuff is reflex arcs, homeostasis, just keeping things alive, the least amount of programmability possible.
Because the lower the level you move it to, the more leverage you give away in the system design. So we mock humans for having to use pencil and paper to do arithmetic or something like that, for running a program by hand; our universality is at such a high level. No, no, that's a good thing. Okay, so how are we gonna get past hardware determinism? My suggestion is, as Lance said in the opening, this idea of indefinite scalability. Suppose the rule is, it's a game, a thought-experiment kind of game: you have to make a unit of hardware, a tile, that can connect with others of its own kind. And you get as much money as you want to buy hardware, electricity, power and cooling, and space to spread these things out. That's it. And the one thing that you cannot do is change the design of the tile later on. You have to pick a size, a mass, a shape, whatever it is, up front, and you can never change it. And yet, this machine must scale indefinitely. I must be able to build these tiles from here to Berlin, from here to Pluto, and nothing changes. I never run out of address bits. I never have to slow down my clock because the light-speed delay from here to Pluto means our clock speed is gonna be, what, one hertz? So the problem is, hardware can give you any degree of reliability you want, but you have to specify that degree of reliability up front. And if we say indefinite scalability, no fixed degree is ever enough. Say I want to be able to survive ten to the ten to the tenth operations before having a chance of failing. Okay, so we'll make ten to the ten to the twentieth tiles and we'll fill up all of the Milky Way with them. And then we'll have failures. So if you play this game, strict indefinite scalability is the antidote to strict correctness. Then you realize, in the end, you have to give up on global determinism. You have to give up on global hardware determinism. Does that mean we're dead? Does that mean we should give up? No, it just means we have to put some redundancy into the software. Software is soft, so we can make the redundancy change as the system gets bigger. You can't do that with hardware, okay? So that's why I take indefinite scalability: if you're willing to play this game, it will lead you to the new world of best-effort computing. All right, so let's talk about what the alternative could look like. As far as I'm concerned, I really want to make clear that the important thing is moving beyond deterministic execution. However you choose to do that, as long as you admit that hardware may fail arbitrarily, so you can't just say "triple modular redundancy, done," that's fine, you're on the side of virtue as far as I'm concerned. This is just, excuse me, my particular opinion. So this is the alternative. Instead of one giant glass sandwich, we're gonna have a whole lot of little teeny tiny glass sandwiches, and we'll try to make the sandwiches as un-glassy as possible. Minimize the software, make it robust, put some redundancy in it, and have lots of paths between the physics and something useful. So we'll expect some of them to fail sometimes, and we'll still do something useful. Easy enough to just sit there and nod and say, well, okay. But programs, code: what's this gonna look like? Let's take a look. All right, that's best-effort computing. We talked about how best effort may fail, how hardware may fail arbitrarily. That makes the software's job seem hopeless, except now it's best-effort software too.
The software can fail also, but if it does, it's gonna try to be close, because we don't believe in strict correctness. So being close is better than not being close. That's what robustness means. Correctness becomes a quality, not a requirement. What we have now says programs must be correct, and then as efficient as possible, and then as robust as possible. What we need is robustness first: programs will be robust, and then as correct as possible, and then as efficient as possible. It's not just one little rotation; it changes everything. Once you begin down the road of indefinite scalability, you very quickly realize that you can no longer hide space. You can no longer make one gigantic random access memory, on pain of the memory clock going slower and slower and slower because of light speed. And really, another way to take what I'm proposing here, what I'm arguing for, what I'm pleading for, is that we need to put space back into computing, all the way down: not just inside RAM, where you have an XY coordinate of the guy that you're shooting inside a game, but actually in a physically different place. And the consequence is, what this guy can do depends on physically where it is in the machine. Why? Because there's another tile next to it that would be different if it were over there, that would be a little more different if it were further away. The re-spatialization of computing. All right, I've said most of this. Really what it boils down to is indefinite scalability, and then actually doing the science and engineering to figure out how to compute this way. We have zillions of fundamental intuitions that need to be re-engineered in this world. So the idea is, instead of following the traditional path of architecture, making the CPU faster, the memory bus wider, the clock speed faster and faster and faster, we give up on CPU scaling, we move to network scaling of some sort, and now these are our tiles that we can plug more of in, even while the machine is running. Decisions still have to be made. There still comes a time when maybe these four guys need to agree on doing something. So they're gonna have to reach some kind of consensus. It's not like I'm saying there's no decision that has to be made. I'm saying that you want the least synchronization, the least centralization, that will do whatever the job is. Instead of thinking we can solve it once and for all in hardware, we'll make the hardware synchronous, no problem. In fact, deciding what should be synchronized with something else, and what shouldn't, is equivalent to the definition of objects in the object-oriented programming sense. And if we've done any significant software engineering, we understand that objects are really important. Deciding where one object ends and another object begins makes an incredible difference in our ability to manage complexity, our ability not to get it correct, but to get it useful, to get it correct-er. Now of course, we're all pretending, right? We're all playing this little phony game, because the instant we take these objects, which in software look like distinct things, and we put them in RAM, there is no object. You can go right off the end of one object and right into the beginning of the next guy: buffer overflow. It happens all the time. But once we have respatialized computing, objects become real. They actually will be spatially delimited, and there won't be stuff in there unless it actually gets in somehow.
Things that are close in space are gonna tend to be more synchronized, because they're gonna communicate back and forth. They're gonna be talking protocols. They're gonna be saying, you know, hey, that guy is currently serving as memory, remembering what part of the string we're done with, or whatever it is. And if, in fact, that guy stops doing that, our little micro-program is not gonna work. But we have to have ways to deal with that. We can't just say it was given to us by God and it will work fine; we have to deal with it. This was the IXM hardware tile that I worked on two sabbaticals ago, in 2008, that was built to illustrate these basic ideas. Each of these has CPU, RAM, and flash, and talks in four directions via serial lines to its neighbors. You could plug in as many as you could support, if you had enough power and cooling to run them. Here is the indefinitely scalable hardware of 2018. This is the current T2 tile, as it's called. I brought an example with us. Here's one, got another one here. We can plug them together and they both just start up. At the moment they don't really say much to each other, because the software still needs work. We'll be working on that real soon. All right, so. The architecture I work on now is called the Movable Feast Machine. It's asynchronous, stochastic, best-effort. It's best-effort buzzword complete, because in fact it defines the buzzwords for best effort too. So it's got them all. Now, the question I want to face is, how would we program such a thing? We've developed languages. We now have two programming languages. The first one is called ulam. And if you look at something like this and you read "element" as "class," then this looks like a little object-oriented thing. Actually, this code has a bug in it, but it's okay for our purposes; we're not actually gonna learn a lot from that. Now, the event window is the spatial data structure that represents our neighbors, our actual physical neighbors. And event window sub-zero represents us. We're in the middle of the universe, because where else could we be? There cannot be a global origin. That would be off someplace in Alpha Centauri, and I would have to have an index of 7,000 bits to say where I am. No, everybody thinks they're at the center of the universe. Okay, so say you want me to do something: EW[1], it turns out, is the guy to my west, so this says take me and copy me to my west. "Symmetries: all," this metadata, means you can actually spin me or flip me before you apply this rule. And so in fact, I'll make a copy of myself toward the north, south, east, or west. You put this in the Movable Feast Machine and it explodes. It's a fork bomb: it just makes copies of itself in every direction as fast as it can, spreading all over. That's ulam, more or less. The newer language, which has not yet been published, the paper is in review now, is called SPLAT. SPLAT stands for Spatial Programming Language, ASCII Text. And the point is, programming languages have this property: you write them down the page, but they're inherently one-dimensional. You go to the end of a line, you wrap around; you go to the end, you wrap around. But that's not what we need. If we're gonna have a machine that's actually scaling in two-dimensional space, we need to have a language that can represent two-dimensional concepts. And SPLAT allows us to do this. The idea is, any line in a SPLAT program where the first character of the line is non-blank is a sentential form. That's a conventional one-dimensional form.
So this, "= element NorthDoor," is a header. It's a section header, like in Markdown. Two equal signs is a subsection, three equal signs a subsubsection. So after the element header, you can put the metadata, the same metadata as here. And then you define rules. These things here begin with a space. That means they are not sentential forms in SPLAT; they are spatial forms. They are parsed in two dimensions. And we look for rules, two-dimensional rules. We hook off of a little transition arrow. We look on one side of it to find a pattern delimited by spaces, and on the other side to find its replacement, delimited by spaces. And so this guy says: the at sign is me; on the left-hand side of the rule, the at sign is always the center of the event window. It says, if we've got anything above me, then you can rewrite it to leave me alone and put a copy of me up there. So what NorthDoor does is grow a line to the top of the universe. The equivalent ulam code is a little bit longer than this, and similar, except we've been through it so many times, and this one has a bug in it. So the stuff I'm gonna show you is written in SPLAT. And SPLAT has a convenient little feature: if you put a U at the beginning of a line, it means the rest of the sentential form is ulam code. So we can drop back down to ulam whenever we need to, because SPLAT is really about expressing 2D patterns in a compact form. Why? Because it matters. Software engineering tells us the language matters. I've tried to do what I'm gonna show you, on and off, for decades. And I could never get it to work. A year ago, on sabbatical, when I first implemented SPLAT, which is implemented in Perl by the way, after I got SPLAT running, I got the thing working in about a month, where "working" has an asterisk on it. But it's an important advance over what went before. Here's some more complex SPLAT code. This is a rule, here's me, and these letters, basically the alphabetic characters, you can declare to be any type you want, and you can put predicates on them that say various things. So this says: if I have a bunch of empty spaces to my left, and an outer membrane above and below me, and three inner membranes next to me, then I can rewrite it to take over that empty spot with a new bit of outer membrane, me, and inner membrane. See how it works. I'm carrying this thing around; is it doing anything? It isn't doing anything. Sorry, that was just a little aside. All right, so here is one tile. The gray square is one tile. We'd lay out four tiles in a grid; that's what it actually looks like. Now if we take one of these guys, this is an element called Content, and what it does is it reproduces itself automatically up to a total population of 64. And it counts how many generations it's done, and when it gets to 64, everybody stops. Now it went up to 64, and now it's going back down again, because Content needs to be around other Content. Its purpose is to make a little glob, a little distributed computation, and they need to stay within communication range of each other in order to do that. And the communication range is teeny. Well, let's suppose that instead of having bare Content sitting in space, we build, oops, we put a membrane around the Content. That's what this is. The darker blue elements are outer membrane, the lighter blue is inner membrane, and the green stuff in this case is Content again. It's growing up, oh, it's actually reached 64. And now it's going to sit there. And this membrane, this proto-cell membrane, is completely stateless.
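As an aside, since this behavior is just a few local rewrite rules applied one random event at a time, here is a minimal sketch, in Python rather than in ulam or SPLAT themselves, of what that kind of asynchronous, event-at-a-time rule application looks like, using a simplified NorthDoor-style rule from earlier as the example. The grid size, the element names, and the exact rule condition are assumptions made up for illustration, not actual Movable Feast Machine code.

```python
# A toy, asynchronous "event window" rewrite loop in Python.
# This is a rough illustration of the idea described in the talk,
# not real ulam or SPLAT; the element name NORTHDOOR and the grid
# size are made up for this example.
import random

WIDTH, HEIGHT = 16, 16
EMPTY, NORTHDOOR = ".", "N"

grid = [[EMPTY for _ in range(WIDTH)] for _ in range(HEIGHT)]
grid[HEIGHT - 1][WIDTH // 2] = NORTHDOOR  # seed one element at the bottom

def event(grid):
    """One stochastic event: pick a random site and, if it holds a
    NorthDoor and the site above it is empty, put a copy of it up there
    while leaving the original alone (a simplified reading of the
    NorthDoor rule described in the talk)."""
    x = random.randrange(WIDTH)
    y = random.randrange(HEIGHT)
    if grid[y][x] == NORTHDOOR and y > 0 and grid[y - 1][x] == EMPTY:
        grid[y - 1][x] = NORTHDOOR

# No global clock: just keep performing random local events.
for _ in range(20000):
    event(grid)

print("\n".join("".join(row) for row in grid))
```

The only point of the sketch is that nothing global is coordinating the work: the line still grows to the top of the toy universe purely through independent, randomly ordered local events.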
It's just a handful of those pattern rules that I showed you on this slide. And the key part of it is that the rules say things like, so right here, for example, we've got a little edge, a little corner, and the idea is that if the stuff inside is crowded, if the density inside is high, we'd like to have the membrane just spontaneously move away a little bit to make a little more room. And if the density is low, we'd like to have the membrane move in and keep it from getting too sloppy, because the Content needs to stay in touch with each other. So the rules do things like say, if we have a little corner like this, we can fill in this empty square. So at the moment, this thing is just sitting here doing nothing. It's being a proto-cell. The Content, as I mentioned, its goal is to actually perform computations for us down the road. The idea here, this is bottom-up engineering. How do you write a program when you don't know what it's supposed to do? What you do is you make stuff that uses the hardware that you've got and recast it into increasingly useful things in general. How do you know what's useful? Every once in a while, you try a little top-down spike. Say, I'm trying to make a thing that'll do a lot of work. But the important part is not that particular spike, but how it stimulates the bottom-up work. It would be really handy to have a little cell thing that kind of holds itself together and is able to move. So: move, you thing, move. We have these Commanders here, another kind of element that we can stick in. And when a Content is next to a Commander, it soaks it up, and it adopts a random direction on a compass rose and a random velocity from one to seven, and it tries to begin moving that way. Does it look like it's moving? A little, but certainly not very much. It's not like everybody immediately goes into lock-step synchronization. What happens is we just bias the statistics a little bit. So if we're picking a random swap into an empty spot, and it happens to be in the direction we wanna go, it'll be a little bit more likely to take it. And if it's going against the direction we wanna go, it'll be a little bit less likely to take it. And the velocity determines how much we bias the statistics. So I don't know if this thing's actually moving much or not. It's fairly slow, but I don't care, because this thing, we shake it down, we let it compute. So one important point about this is that, in fact, all the Content guys are gossiping to each other: have you heard a new command? Have you heard a new command? They share a counter that wraps around, that says, oh, I have a newer plan than you do. Oh okay, I'll take it. And you have to make a design decision about how big that counter is gonna be. How many bits? It's called the count-to-infinity problem in distributed systems. Because if you have so many commands in flight at once that it wraps all the way around, then somebody is gonna reject your command as being old rather than incredibly new. Do we try to find an exact solution to that? No, we admit that our local solution, our software solution, is going to have size limitations built into it, just like our hardware world. Okay, well, if we're gonna really see what's going on here, we need to speed this up, because it takes a while and it's just slow in general. Once we have the ability for the Content to make a distributed decision about what to do, like "let's go east at three," I couldn't not ask the next question.
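Before the next question, a quick aside for concreteness: here are rough Python sketches of the two mechanisms just described, biasing the statistics of random swaps by a velocity from one to seven, and comparing a wraparound command counter. The probability formula, the counter width, and the function names are my own illustrative assumptions, not the actual values used in the demo.

```python
# Illustrative sketches of two mechanisms from the demo, in Python.
# The exact numbers (bias strength, counter width) are assumptions
# made up for this example, not the real Movable Feast Machine values.
import random

MAX_VELOCITY = 7

def take_swap(direction_matches, velocity):
    """Decide whether to take a proposed random swap into an empty spot.
    If the swap goes the way we want, be a bit more likely to take it;
    if it goes against us, a bit less likely. The velocity (1..7) scales
    how strongly the statistics are biased."""
    bias = 0.5 * velocity / MAX_VELOCITY          # up to +/- 0.5
    p = 0.5 + bias if direction_matches else 0.5 - bias
    return random.random() < p

COUNTER_BITS = 4                                  # the design decision: how many bits?
COUNTER_MOD = 1 << COUNTER_BITS

def is_newer(mine, theirs):
    """Wraparound comparison of gossiped command counters, in the spirit
    of serial-number arithmetic: 'theirs' counts as newer if it is less
    than half the counter range ahead of 'mine', modulo wraparound.
    With too many commands in flight at once, this misjudges old as new,
    which is the count-to-infinity style limitation mentioned above."""
    return 0 < (theirs - mine) % COUNTER_MOD < COUNTER_MOD // 2

# Example: a Content site hears a gossiped command and adopts it if newer.
my_counter, heard_counter = 14, 1                 # 1 is "newer": it wrapped around
if is_newer(my_counter, heard_counter):
    my_counter = heard_counter
print(my_counter)
```

The design choice is the counter width: more bits tolerate more commands in flight before the wraparound comparison misjudges, at the cost of a few more bits carried in every Content atom.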
This stuff that you're seeing now is what's actually in review, in the paper. But now we've got actual new stuff that nobody has seen except the super insiders. Ah, what if we did this? What if we had a different command that, instead of saying go west or go east at two, said: make a copy of yourself, and have the copy go west while you go east. So what had been 64 guys would become 128 guys, but 64 of them would be going this way and 64 of them would be going that way. What would happen? Reproduction, how hard could it be? So I had to try it. Well, maybe it's a little harder than I thought. And in fact, I don't know how to do it right. But what I do know is that finding out what happens when I did it wrong blew a little bit of my mind. So this one I'm gonna show you is about two minutes long. It's going at about 1,000 here, 1,000 events per site per second, which is what we're gonna see. But I'm gonna plop down four of these little cells that I've now modified. So instead of just rattling off in one direction or another, they may actually decide, number one, to die, or they may decide to double their number and go in opposite directions. We'll see it happen every once in a while; they decide to grow, like this guy. So when they turn red and green, they're in reproduction mode, where the red ones and the green ones are going in opposite directions. Or they're trying to. This guy here, he's a bug. The thing that I love about this is you can pick any one of these cells and just ask, what's gonna happen to you? And the amount of plot that's going on here is just absolutely incredible. There are so many stories happening in here that you can actually follow from the beginning. What's happening to this guy? There's that other guy. Oh, they merged. Wait, now they split. That is, as far as I'm concerned, everything about life that matters, except there's no evolution. The creatures are all the same; they're completely making random decisions. Nothing changes when they merge; they don't act differently. But the idea is, this is the way you build it: you build it bottom up. And to add evolution to this, it's not gonna be hard: make it have a slightly bigger probability of going one direction or the other, and have that be written on the Content. So the Content that happened to move with one command are gonna tend to go in one direction, and so on. I know it sounds crazy. Like Lance said at the beginning, it's crazy, and I really didn't have enough time to work on it. It also sounds crazy, but I need to retire to have time to work on my research. This is it. Thank you so much for listening.

So, about two-dimensional programming: what motivated me to come up with a new programming language that has space represented in it, when the machine is already spatialized, like a cellular automaton? Well, I had students last semester who were trying to work with me on how to actually do software engineering here, how to actually write code, and being able to manage the complexity by making a small bit of a new language matters. I mean, you write a SPLAT rule like this, and this rule by itself generates probably six ulam classes, each of which generates a thousand to two thousand lines of C++ when this thing all compiles. So it's all about managing complexity so we can climb higher. The machine was already 2D, so I made a 2D language. What about other kinds of geometry, like 3D? Yeah.
Yeah, so, well, 3D is the number one question that comes up. And my answer is, depending on what mood I'm in, I say, you know, we need to crawl before we fly. Or I say, I need to reserve one dimension to build the thing and to fix it. Imagine if you have a three-dimensional computer: how are you gonna actually fix something in the middle of it? It's gonna be a bit of a challenge. And so fundamentally, I'm just keeping the third dimension in my back pocket to do other engineering. I think it would be relatively easy to imagine taking a 2D model like this and having a finite number of layers, sort of a 2.1D model, where there'd be a little local communication up and down, and then have it be scalable in two dimensions. And I think that might in fact be quite powerful. Beyond that, if you think about things like, well, what about wraparound, torus connectivity, non-Euclidean stuff, I say, well, you can do it if you want, but you have to respect indefinite scalability. Our world is 3D. You can make little tricks to embed toruses in a thing, but it has other consequences. What are your other questions? Given the relative newness of quantum computing, how does that fit in? Are we just going to repeat the same mistakes that von Neumann did, or have we opened a door? I sure hope not. But we probably will at first. And part of why I am focusing on computer architecture is because we need to have an alternative that people could look at, rather than just ticking along, or whatever it is, doing some kind of signal processing something. Now, I'm a bit of a curmudgeon in the end. Quantum computing is going to provide huge constant factors, once all the error propagation and decoherence is taken into consideration. So we'll use it, because it will provide huge constant factors, but it won't be the magic that people hope for. So in fact, I think it'll feed into this sort of thing, as just a good substrate to use. So what would you do if you wanted to prevent information leaking in a robust-first computing system? Because it seems like robustness is at odds with information hiding. Yeah, so part of robustness is making lots of copies of things. That's how you make it robust. Well, but then how do you keep track of who's got copies and who over-shares? It's much like trying to keep track of your DNA. Good luck with that. There's kind of this cloud of DNA sloughing off of you everywhere you walk. And it's going to be sort of similar to that. But the answer that I want is that what we're going to do is take the information that's sensitive and put layers and layers and layers of stuff all around it, and whoever wants to steal it is going to have to conduct a ground war across the grid, tile by tile by tile, trying to get to where it is. And the tiles are going to be doing their best to only communicate by protocols that are as weak as possible, as non-computationally-universal as we can make them, to limit that flexibility. One last question. Is SPLAT in your PPA yet? It's not in the PPA; it's on GitHub. You can git clone SPLAT, but it's rough. You thought ulam was rough? I like it, though. Thank you. It really is very cool. Again, thank you all so much.