Thanks for staying, everybody. This is the title of the paper. I made it a deliberately boring title, because I think it's actually really badass work and I didn't want to gild the lily. But I'm not going to talk about the paper directly. I'm going to spend the time giving the context behind the work, and then hopefully you'll be propelled to read it and find the badass part when you get to it.

So I've been working in this area for quite a number of years, because I have this belief that computation, the way we build computers today, is deeply broken. And it's broken in the following sense. The whole idea of hardware determinism, the guarantee the hardware makes, same program, same input, you're going to get the same output; do it again, you get the same output: that idea is training wheels for computation. And we've been keeping the training wheels on our computer architectures for seventy years.

As Inman was saying, real living systems don't do that. Living systems expect to deal with errors and uncertainties all the way up the computational stack. We don't. We think hardware is going to take care of it. And so software acts like this incredible prima donna for whom everything has to be perfect. This is software. And if anything goes wrong, software throws a tantrum and says, I'm in my trailer.

We have to get beyond that. The flip side is that, doing it the way we do it, all we care about is correctness and efficiency only. CEO: correctness and efficiency only. That's considered all we need for software. But in fact, what we need is robustness, and working pretty well. We could have that. But we will not have it while we're stuck in the correct-and-efficient-only attractor.

So I suggest the future of computation is this idea of indefinite scalability. We won't have a CPU. We won't have a central processor.
We will have oceans of little tiny processors. They're interacting. They're failing. We're adding more to the thing. We're building the machine while it's running. And software, the entire computational process, will not guarantee you get the right answer. That was a crap guarantee anyway, as Inman also pointed out. Instead, we will have best effort: I will do my damnedest to get an answer that's pretty good. And that's all you could ever really guarantee anyway.

So this is my mission: fix computing, make the world a better place. We have breaches where a hundred million credit card accounts get lost, hundreds of millions of dollars gone, and we think that's normal. It's crazy. We are living in craziness.

So I want to tell you what I've been doing for the last decade or so. It started with hardware. These are little computer tiles that were marketed briefly under the name Illuminato X Machina. Here it is powering up; you get the idea. I did not pick the name; that was a marketing name. These individual little tiles plug together. They share power. They share communications. You update the software in one of them, and the update goes hop to hop to hop until the whole thing is updated. I wrote the operating system for this. It was great fun. They had their limitations; they were roughly a 2000-era cell phone CPU.

Then I went to the software side of things. What are we going to put inside these little tiles? The answer was going to be something like a cellular automaton, but not the traditional kind. Number one, it cannot be synchronous, because we're adding new tiles all the time and they run at different speeds. We cannot wait for the last tile out by Pluto to finish its tick before we go on to the next one. And we cannot assume determinism. So the Game of Life is right out. We have to figure out how to compute without that.
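One way to picture computing without a global tick is a toy asynchronous cellular automaton. The sketch below is my own illustration, not the actual tile software: the grid size, the event loop, and the majority rule are all invented for the example. Each event fires one randomly chosen site, which acts on whatever its neighbors happen to look like at that moment; no site ever waits for any other.

```python
import random

GRID = 16  # toy world size; an indefinitely scalable machine has no fixed bound

def step(grid, rule):
    """Fire one randomly chosen site. There is no global tick:
    the site just reads whatever its neighbors look like right now."""
    x, y = random.randrange(GRID), random.randrange(GRID)
    neighbors = [grid[(x + dx) % GRID][(y + dy) % GRID]
                 for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                 if (dx, dy) != (0, 0)]
    grid[x][y] = rule(grid[x][y], neighbors)

def majority_rule(me, neighbors):
    """Toy local rule: drift toward the neighborhood majority."""
    return 1 if sum(neighbors) > len(neighbors) // 2 else 0

# random initial state, then many independent one-site events
grid = [[random.randint(0, 1) for _ in range(GRID)] for _ in range(GRID)]
for _ in range(10_000):
    step(grid, majority_rule)
```

Because updates happen one site at a time in random order, adding more sites, or running some faster than others, does not change what the rule means. Contrast the Game of Life, where every cell must see a consistent snapshot of the entire previous generation before anyone moves on.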
So these are examples: these little atoms would connect up and make chains, and there was a copier, so you could actually reproduce the chains. And it was very organic, in the sense that it didn't even wait for one copy to be complete before it would start copying the copy that was in progress. Why? Because there was room. There was space. You could get another guy in there, and that's what happened.

So that was hardware, and that was software. And then there was advocacy, because I realized that this correctness-and-efficiency-only idea is so deep in our brains, especially in computer science, that we don't even notice the determinism. I can't take much time on this, but this is a sorting example. Imagine sorting where the comparisons might be wrong, just too bad. Now our beautiful quicksort and merge sort, the pinnacles of computer science sorting theory, do terribly. Why? Because they're designed to exploit hardware determinism. Whereas bubble sort, that pitiful, horrible black sheep everybody hates, is great. Why? Because it doesn't exploit that leverage. It compares things redundantly, and it only moves them in little steps, so if it gets a comparison wrong, it doesn't move them very far.

The bottom line here is that efficiency and robustness are at odds over redundancy. Robustness requires redundancy; efficiency eliminates redundancy. So you've got to be careful when you motivate your work on the grounds of efficiency, especially if you're in ALife. That's a red flag. We all do it, but realize you're vulnerable, because you just made your system fragile right there, wherever it was that you claimed efficiency as the result.

So we invented programming languages to help us express transitions in this asynchronous, nondeterministic cellular automaton. Here's an example of a fork bomb. You put one in the middle and you let it run; it grows, it goes crazy. Okay, well, big deal.
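The sorting point above can be checked with a quick experiment. This is my own sketch, not the talk's actual demo: the comparator lies with probability `p_err`; merge sort trusts each comparison exactly once, while bubble sort compares redundantly and moves items only one slot per swap.

```python
import random

def make_noisy_less(p_err):
    """Build a 'less than' that lies with probability p_err."""
    def less(a, b):
        truth = a < b
        return (not truth) if random.random() < p_err else truth
    return less

def merge_sort(xs, less):
    """Classic merge sort: each comparison is trusted exactly once,
    so a single lie can strand an element far from home."""
    if len(xs) <= 1:
        return list(xs)
    mid = len(xs) // 2
    left, right = merge_sort(xs[:mid], less), merge_sort(xs[mid:], less)
    out = []
    while left and right:
        out.append(left.pop(0) if less(left[0], right[0]) else right.pop(0))
    return out + left + right

def bubble_sort(xs, less):
    """Bubble sort: redundant comparisons, and each swap moves an
    element one slot, so a lie causes only local damage."""
    xs = list(xs)
    for _ in range(len(xs)):
        for i in range(len(xs) - 1):
            if less(xs[i + 1], xs[i]):
                xs[i], xs[i + 1] = xs[i + 1], xs[i]
    return xs

def displacement(xs):
    """Average distance of each value from its correct slot
    (assumes xs is a permutation of 0..n-1)."""
    return sum(abs(i - x) for i, x in enumerate(xs)) / len(xs)

if __name__ == "__main__":
    data = list(range(50))
    random.shuffle(data)
    noisy = make_noisy_less(0.1)
    print("merge sort displacement :", displacement(merge_sort(data, noisy)))
    print("bubble sort displacement:", displacement(bubble_sort(data, noisy)))
```

With an honest comparator (`p_err = 0`) both sorts are exact. With a lying one, merge sort typically leaves some elements stranded far from their slots, while bubble sort's average displacement tends to stay small, which is the redundancy-versus-efficiency trade in miniature.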
Once you start having more complex transition functions, the code isn't quite as simple. The work that's actually new, the work you can read in the paper this time, is a language called SPLAT, which stands for Spatial Programming Language As Text. The idea is that you write little patterns with characters, Emacs picture mode style: the at sign is me, the dot is anything. If you match that pattern, you can replace it with a me and another copy of me. That's a fork bomb element.

And you can very quickly get to more complex rules. These are rules that let us make a two-layer-thick line grow in and out on the basis of the density inside it, and we use them to make a simple cell membrane. It looks something like this: a seed sprouts, the two shades of blue are the inner membrane and the outer membrane, and it just goes. The idea is that the membrane has no state. It just says, go away from high density, go toward low density. And as a result, when the insides move, the membrane moves with them.

So is this real, or is this a simulation? We're now taking the simulator, putting it into physical hardware tiles, and letting them talk to each other. Is it real? Is it a simulation? Of course it's both, depending on how you look at it. And that's about all of my time. Thank you so much.

Thank you very much. I thought that was a really fantastic talk. How are you going to convince people that deterministic hardware and software can be improved on with this new approach? I mean, people like to blame things. They like to blame other people. They like to have something to hold responsible for mistakes, for errors.

It's been great for the software people: anything that goes wrong is a hardware problem, my hands are clean. We have to renegotiate that boundary, and that's what best effort is all about.
Hardware promises its best effort, but it reserves the right to fail. So software has to carry some of the load. Are people going to like that? No. Tough. So exactly how we get there, I don't know. My goal is to get as many people as I can as excited as I can about wanting to work on this, about thinking that there's an alternative to what we're doing now, and to see if we can't get to it.

Next question is for you, Robert. Go ahead.

As always, I think my question is going to be reminiscent of the questions I've asked after the talks you've given that are similar to this one; they're all the same. But going back to one of Inman's points: even though software appears to adhere to the CEO paradigm, in point of fact it doesn't. It depends on the software. Because it's failing all the time, and we're living with software failures all the time, and we get on the plane anyway. And there are various versions of software failing, and of dealing with failure, out in the world right now that aren't the result of engineering new software the right way, the way you're doing it, but rather, you know, we have internet protocols that are waiting around for packets. So I'm wondering if there's an ongoing software engineering evolution that's kind of creating a path toward what we're talking about, without doing things the right way from the ground up.

Oh, that's great, yeah. There is. Someone was making this point in the SignalGP talk the other day: the whole idea of going event-driven. If you make the chain between input and output as short as you possibly can, and then lean back on the state of the living environment, then you have less at risk. Obviously you can still build bigger structures on top of that, but by and large you get fewer dependencies with shorter program chains.
So that's an example of how it's happening anyway. My concern is that it's happening so slowly. The urge to domination, the urge to compute as a dictator, top down, must go like this, must go like this, is hard to resist unless we understand the costs. So I'm trying to plant a flag all the way at the other end and say, look, it could be like this; we can actually compute this way. It's not terribly efficient, but it's unbelievably robust.

I think we're going to take one last quick question from the audience.

Quantum computing, if I can have a minute. I came from a university where people were all about wanting to squeeze the errors out of quantum hardware. Maybe your proposal says you don't need to do that; you just take advantage of it and do things differently. I mean, ALife could actually make a big contribution to developing the architecture of quantum computing.

Yeah, that could be. At this point I have to confess that I am a quantum curmudgeon. I actually believe that once error containment and decoherence are properly accounted for, quantum computing is going to carry huge amounts of overhead. It would be nice, but it's actually not that scalable. That said, it would be nice if, instead of using our quantum computers to play, I don't know, Tic-Tac-Toe, we thought about doing something more like this. But that's a separate issue.

So, there are plenty of other questions, but unfortunately we have to end here. There's going to be an announcement about the very last session of the day, and about a workshop as well, so bring the rest of your questions to the coffee break. Thank you. Thank you.