Everybody, thanks for coming. It's great to be here. I feel I should start with a warning: I really hate it when people give a talk and it turns out they have an agenda they didn't disclose up front, so I want to disclose my agenda up front. For the last several years I've been researching, exploring, and advocating for a qualitatively greater degree of robustness in the computer systems we use in the world, in society, today. I think the way we're building computers is beyond crazy. And I think there's a better way to do it, and that better way involves the sort of robustness that SFI knows a lot about. I try to maintain a neutral point of view as I give these talks, but I sometimes inevitably slip into excessive partisanship, and since I'm going to be saying some tough things about tools we use quite frequently in science and engineering, I urge you to call me out if I step too far; it will help me improve the pitch.

With that said, I believe every talk should begin with the creation of the universe and work its way forward. So today, the beginning of the universe is the purpose of science, or the purposes of science. In particular, I want to make a distinction between science in service of knowledge of what exists, and science in service of building what does not exist: science in service of understanding what's out there, and science in service of engineering new things. We'd like to think those are two tracks that travel in parallel, that if you learn more about what's out there, you can use that in some way to engineer stuff that is like what's out there, or a modification of what's out there, and so forth. And many times you can. For most of the history of science, I feel, scientists have typically been thinking: I'm trying to explain what's out there, that's my purpose. And if somebody else, an engineer, comes along and takes my theory and tries to build something with it, well, that's on them. My job is done when I explain some piece of data, or predict some piece of data, something like that.

However, when it comes to computing devices, and in particular when it comes to computer architecture, the way we frame our models, even if we built them just for scientific understanding of what's out there, can make a huge difference in the kinds of devices we are then able to build from that model. When we're building models to explain stuff out in the world, we can do anything we want. We can assume there are periodic boundary conditions. We can assume the whole thing is synchronously clocked. We can assume it's perfectly reliable. Whatever works, because whatever model we make is justified by the fact that it does explain the data, it does make a prediction, and then we're done. But a lot of those assumptions, and these are the four from the title, taken the hard way, are assumptions that are very frequently made, and they come around and bite us when we use them not for science but for engineering, as the basis of computing models. So that's the aggressive case I want to make. And my claim is that the assumptions we've used to build computing machinery were great to get started and are running out of gas. Computer systems are no longer growing like they used to: not getting faster, not getting bigger. And they're completely unsecurable. So much so that we are absolutely living in crazy land by thinking we're going to deploy this kind of technology with more and more and more responsibility.
Stuff that has more and more kinetic power behind it. And I was reminded, you may have seen this from John Oliver on Sunday, where he's making the point that, you know, we've got the whole Apple versus FBI thing that is supposedly about encryption, ultra-strong, unbreakable encryption. And John Oliver, because he's great, at the end of it makes the point that really Apple would be more honest if they admitted that the encryption argument is basically about putting a massive lock on a door made of Kleenex. Because the entire model, the entire architectural model, is so fundamentally unsecurable that if you can't go in through the encryption, who cares? There'll be a bug here, there'll be a bug there, there'll be a bug somewhere else, and you just drive right around it. And so this was the payoff at the end: after we've seen all the Apple engineers working feverishly, trying to catch up with the bad-guy hackers who are getting in, where we are is, don't worry about that. Keep dancing madly on the lip of the volcano as we're about to fall into this unbelievable security disaster. And that is indeed what I believe is the case.

All right. So yes, jump in. And what did they say? Well, it depends. I mean, "total disaster," snicker, snicker, like that. Yeah, yeah, that's right, you should definitely have good encryption. Don't look at this other hand over here. And I'd like to give some suggestions as to why it is a disaster, number one, and, number two, that there is an alternative. There is actually an alternative. It wouldn't matter if we were in crazy stupid land if there were no other choice. But there is another choice. It's just that we know almost nothing about it. And the reason we know almost nothing about it is that it's almost a dual to the approach we're using now. It's insufficient to change one assumption. You change one assumption, it looks strictly worse than doing what we're doing now. You change two assumptions, it probably still looks strictly worse than what we're doing now. You have to change a whole bunch of assumptions all at once, and people are naturally reluctant to do that.

Okay, here are the four assumptions I want to talk about: determinism, centralization, closed boundaries, and synchronization. Those are all hallmarks of the way we do digital computation today. And they are hallmarks of many theoretical and mathematical models that we use for all kinds of useful purposes when we're using them for science, for explaining and predicting what's out there. But then when we turn around and use them as a basis for designing machinery, that's where we go wrong. So the alternative I'm going to suggest is best effort computing. And the idea is: the number one thing that we know about computers, theoretically, is that if you wait for them to finish, they'll give you the right answer. And if they can't give you the right answer, because something's gone wrong, they will crash. That's what determinism means. Same input, same output, guaranteed, or a crash. Those are the only two outcomes. Okay? And that's the way computers have been designed since von Neumann and before. And the beauty of this is that it lets you completely ignore physics, lets you completely ignore reality, and just live in the land of logic, just live in the land of modus ponens. And it's so happy, and we're good at that, relatively speaking. Absolutely. Absolutely. Yes, yes. This is what we call separation of concerns, and we think this is a brilliant, smart economic move. And it is.
Up to a point, until it isn't. And the suggestion is that, from some points of view, scalability for one, we're getting to the point where it's not such a smart move. And from the point of view of security, forget it; we passed the point where it was a smart move in the 60s, long before anything like the internet was a gleam in anybody's eye. That's not true, actually, because there are these things called lasers and heat guns with which we attack the hardware. Even if the program is absolutely perfect, we'll just come in and selectively flip some bits. Let me say this: I agree with you up to a point, but fundamentally the division of labor between hardware and software is broken. So to say the problem is on one side or the other is missing the fact that the contract needs to be renegotiated on both sides. You're accepting that the way hardware works is okay, and I want to try to convince you, well, let me go on and see if I can convince you, that the way these assumptions interact means software has to come out essentially as crappy as it is. To think that it's just a matter of lazy programmers or bad managers or something like that is really missing the bigger point: it's impossible to write software that doesn't suck in the sense that you're thinking of.

Okay. So I want to go through each of these things. Okay, I've got a clock right there. When did I start? We don't know. All right, 12:15, so I'm going to end at one-ish or something like that. All right. It could happen. That's why I put the conclusions at the beginning.

So there's an easy way and a hard way, and the idea is to counterpose them. The easy way is this idea of determinism. And the basic pitch is: we're going to divide the world between hardware and software. The job of hardware is to turn physics into logic. The job of software is to turn logic into money. And you have to do both; you have to get all the way to the end, and there has to be enough money to pay for the hardware and the software. And that's the deal, and it's worked great. And the way the deal worked is by hardware providing guaranteed reliability. That's what hardware determinism means. Hardware guarantees to either give you the same output for the same input, or crash. If it does neither of those, then hardware has violated the contract. But if it does give you the same output for the same input, or crash, and anything else goes wrong, then software has violated its side of the contract. That's the way the negotiation worked, and it's been great. One thing at a time, step by step by step: serial determinism is how all programs, all computers have worked, from von Neumann through to today, and a bunch of people were involved in it.

Now, there are problems with determinism, and people have complained about it, not least von Neumann himself. You have to kind of read between the lines, but really von Neumann basically said: I made this von Neumann machine with hardware determinism not because it was such a great idea, but because I was sure it would work, I was sure we could make it work. And in the future, he thought in the near future, we're going to get rid of hardware determinism, and we're going to realize that operations will have to be allowed to fail with low but non-zero probabilities. In other words, the software was going to have to recognize the fact that errors might get delivered by the hardware without crashing the machine. Okay?
And he thought this would happen by the time we got to 10,000 gates, 10,000 switching organs. The thing is, you know, I drank the deterministic Kool-Aid for a good 40 years, a long time, and it took me a tremendous amount of learning to get past it, which is of course why I end up being sort of strident about it now. Nothing like a reformed convert.

We think the mission of computing, of writing programs and algorithms, is to be efficient; that's what it's all about. But at the hardware level, it's unbelievably redundant. When analog computers were around, back in von Neumann's early time, you could get a useful result from a machine with seven amplifiers, but you could not do error correction. In a digital machine, every gate is an amplifier. So now we say: let's take an entire wire that could easily hold two and a half significant digits and use it to hold one bit. Incredibly redundant. Let's put an amplifier every time the wire takes a turn, to regenerate the bit before it has hardly any chance at all of drifting far enough to flip. Digital hardware buys its reliability by an incredible act of redundancy. And then we forget that. We get to the logic layer and we say it's all about efficiency, it's all about eliminating redundancy. And that, I argue, is in fact part and parcel of the problem, and part of the fundamental reason why the hardware-software boundary has to be renegotiated.

Go for it. Yes, please. You have to pay for it, though. So, increasingly, it's interfacing with the engine in your car. Well, okay, that's interesting. But my view is that I want my language and software to be efficient because I want to be able to grasp it, and so not redundant. Not necessarily. I mean, it's much easier, from some points of view, to understand a linear system than it is to understand a logical system that can slice the problem space up into arbitrarily tiny little hypercubes and do completely different things here versus there. And it would be much easier if you had a system that was 90% linear, with just a few little twists in a few places, that you could get to know. So I think we've become brainwashed by logic, and it's taken us away from thinking about statistical inference rather than logical inference. And intuitions could change, I think.

Chris, I agree with that, with two footnotes. Footnote number one is that using software to set these capacitors and transistors and wires is, in fact, much softer than using a wire and a piece of solder, in terms of the odds of that thing going wrong. And that matters. And number two, using software allows you to do this virtual soldering after the device has been sold, and that's key. You make the same device once and you sell it to everybody, and you can amortize a $7 billion chip factory and sell the chips at a buck each, because you're going to sell hundreds and hundreds of billions of them, whatever it is. And you can't do that if you have to do the soldering in actual hardware. We need to recognize there's a spectrum from hardness to softness, and we in fact want to get more gradations in there. Living systems have all sorts of degrees: a little bit of plasticity here, a little bit of homeostasis there, and so forth. This Boolean split, where everything is either completely hard or totally soft, is part of the problem.

Okay. So this is the big one. If we are going to reject determinism, that means our computer may say 1 plus 1 is 3, and our software is supposed to deal with that.
Or worse, we say "if x equals 0," and x is equal to 0, but the machine just decides to go the other way anyway. What's your program going to do with that? How can we even imagine programming that could do anything reasonable with that? So the suggestion is: if we give up on hardware determinism, the replacement is best effort computing. Best effort comes from the land of networking and telephony, where best effort networks do not actually promise to deliver your bits. The classic example is the postal system. The postal system makes no guarantees about how long it will take your letter to get from A to B, because it didn't know whether you were going to send one or not. It didn't save room on the truck for it, and it might just have to wait for the next truck. The postal service delivers your mail under best effort conditions. I'm suggesting we have to take that notion of best effort and bring it inside the computer, into the interface between hardware and software.

Yeah, right. Are you also, I mean, there are certain mission-critical applications, like medical devices. Absolutely. Are you suggesting that even in those you move to best effort? Especially in those. The idea is going to be this: we currently pretend that you can't even talk about an algorithm at all unless it's correct, unless it actually does what it's supposed to do. And if the program is incorrect, talking about whether it's efficient or not is completely stupid. So the problem is that today, essentially all of the software in this room, you can't say whether it's correct or not, because there's no spec. There's no specification, no formal document that says this is a legal Google search and this isn't. And between the time you started typing the query and the time you hit enter, 7,000 new pages were added to the web. You should be able to find them, right? Well, no, because there's no spec for correctness. So we're living in this emperor's-new-clothes land where we pretend that algorithms must be correct for us to even talk about them, and it's really not even possible for them to be correct.

The suggestion, when we move to best effort computing, is that we are going to survive, we are going to be close, we're going to give some kind of answer. And for things that are mission critical, we are going to overstock the hardware by a redundancy factor of 30 or 70 or whatever it is, so the chance of getting a significantly wrong answer is as small as you like. But we're not going to guarantee it, because we can't. That's the idea.

Right, this is what I'm saying. I mean, in computer science, the idea of correctness basically means you have a program, and then you have another description of what the program is supposed to do, written in logic or mathematics typically, and proving a program correct is showing an equivalence between the actual program and that theoretical description of the program. If you can show that, then you have proven the program correct. So in that case it would be well defined, if we agree that this is the spec. The problem is these specs don't exist. So you're right: a program just isn't correct or incorrect on its own, unless you have the spec you can measure it against.
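(A minimal sketch, not from the talk, of what "correct relative to a spec" can mean in practice: if a spec does exist, even as a runtime check rather than a proof, best-effort software can at least ask after the fact whether a particular run satisfied it. The sorting spec below is purely illustrative.)

    // A spec written as a runtime check: the output of a sort must be
    // ordered, and must contain exactly the same elements as the input.
    // "Correct" here means nothing more and nothing less than passing this check.
    #include <algorithm>
    #include <cassert>
    #include <vector>

    bool satisfiesSortSpec(std::vector<int> input, std::vector<int> output) {
      if (!std::is_sorted(output.begin(), output.end())) return false;  // ordered?
      std::sort(input.begin(), input.end());    // canonicalize both multisets
      std::sort(output.begin(), output.end());
      return input == output;                   // same elements, nothing added or lost?
    }

    int main() {
      std::vector<int> in = {3, 1, 2};
      std::vector<int> out = in;
      std::sort(out.begin(), out.end());        // the "program" whose run we're judging
      assert(satisfiesSortSpec(in, out));       // this particular run, at least, met the spec
      return 0;
    }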
Well, and this is the real problem: program verification was something that was really hot in the 70s and 80s, and they got up to 20- or 50-line programs, and it was really, really, really hard to go beyond that, because the intractability of the thing is so far beyond mere polynomial problems, or even exponential ones, and so forth.

So here's my proposal for the new contract. This shall govern all hardware and software in the future. Hardware shall make its best effort to operate properly, but it doesn't guarantee to do so. And even the way that it fails is uncertain. It's not that it promises to have Poisson errors, or IID errors, or no more than two bit errors, or anything like that. The way it fails is unknown. You cannot box it up and factor it out. It's always there. Have a nice day. This is what you get. This is the hard way. And the idea is, if we take these tough decisions up front, and if we are good enough, we could make software anyway. If we're good enough, we could actually make stuff that will compute on top of garbage hardware like this. We just don't admit it.

Yes. And that's important, because again, the purpose of an interface is to assign blame, right? And so if the current interface says hardware guarantees reliability, that means if it doesn't, it's hardware's fault, so there's no pressure on software at all to sweep up after hardware errors that may occur. Nope. It's unknown. It's unknown. And you go, okay, so I might as well just shoot myself, right? Because there's nothing I can do. You have to go to church all the time. Sometimes your best is not good enough. And that's the point, because the flip side of it is that software becomes best effort as well. Software no longer has to pretend it will guarantee correctness either. And so if, despite all our best efforts to cover these weird error patterns and these long tails and these black, blue, green swans, whatever it is, the hardware still screws us harder than we were expecting, we get to die. And leaving the failure mode unknown keeps the software guys from just applying the surgical minimum amount of redundancy to hide whatever we admit the error distribution is, and then going back to their jolly determinism.

Well, I'm not sure that's even technically true, if you consider, for example, a shotgun or a taser to be a physical process. That could happen to hardware. Right. We can make pretty much any kind of error we want once we get malice in there. Was there another comment? The mapping from input to output might not be the same twice. It's going to come from some properties of the physical system itself. We're going to try. I'm sorry, the word "have to"? I'm not sure about "have to"; "want to," absolutely. Well, I mean, the point of it is that hardware can fail. That's just reality, right? And so it may be the case that, between input and output, one and one does produce three. And that may cause the answer to be off by a tenth of a percent, or by a billion. This is just admitting that. So you ask, where does the extra information come from? You could say it comes from noise, but it does come in. And what we're doing here is loading up software's job to admit that. And maybe, if software's got an extra microsecond, it could run the thing again just to see if the answer comes out the same. You know, is it still one plus one is three? How hard is that? It's not hard at all, if we have our architecture set up to, number one, make it software's fault if it doesn't check its work. Don't we teach that in elementary school, when you're learning how to multiply?
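(A minimal sketch of that "check your work" idea in C++, again not anything from the talk: wrap an operation you don't fully trust, run it several times, and only accept an answer that a strict majority of runs agree on. The run count of five is arbitrary, purely illustrative.)

    // Best-effort software that doesn't trust a single execution: run the
    // operation several times and accept an answer only if a strict majority
    // of the runs agree on it.
    #include <cstdio>
    #include <functional>
    #include <map>
    #include <optional>

    std::optional<int> checkedRun(const std::function<int()> &compute, int runs = 5) {
      std::map<int, int> votes;                 // answer -> number of runs producing it
      for (int i = 0; i < runs; ++i)
        ++votes[compute()];
      for (const auto &[answer, count] : votes)
        if (2 * count > runs)                   // strict majority agrees
          return answer;
      return std::nullopt;                      // best effort failed: no consensus
    }

    int main() {
      auto flakyAdd = [] { return 1 + 1; };     // stand-in for an operation we don't trust
      if (auto r = checkedRun(flakyAdd)) std::printf("agreed on %d\n", *r);
      else                               std::printf("no consensus\n");
      return 0;
    }

(If the hardware mangles one run out of five, the bad run gets outvoted; if it mangles most of them, the caller at least learns that it doesn't have a trustworthy answer, instead of being handed a wrong one with a straight face.)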
You're supposed to divide and check. Computers never check their work; that would be stupid. The answer has to come out the same way, or else it's hardware's fault. So absolutely, absolutely, yes. But we have brains, so there's no question about it. Yes. The second question is: are you going to build a prototypical system, you know, an indefinitely scalable architecture, a toy system, which has these ideas and shows that the performance is better than the current approach? Or the robustness. And we have to be careful how we define performance. But yes, once we define performance in a way that reflects what we actually believe, which is being either right or close, no matter what kind of horrible thing you do to the system, then absolutely. And I'm going to skip through a lot of the secondary arguments here, because I want to do lots of demonstrations where we can see examples of this thing doing various stuff. It may.

And in general, if something is mission critical and we have enough money, then we're going to have to come up with a correctness cost curve to say how much it matters to get the last bit right (a toy sketch of that idea appears a bit further below). Right now, we just say you have to get every bit right or else it's 100% wrong. And unfortunately, that's not tenable. And it gives no derivative; it doesn't tell you where to armor. When in fact, and I've got a grad student working on this, inside typical programs you can take certain operations and armor just those operations; there are sort of moments of high flex in the transformation that's happening from input to output. And if you admitted that perfect was better than close, and close was better than far, you could say: well, what I really ought to do is do this comparison 10 times, and then not worry about all those other ones. But if we say you must have down-to-the-bit correctness or you're crap, there's no derivative; you can't find anything there.

Okay, so let me rush on a little bit so that we can see... go for it. Yeah, yeah. Maybe, for example, we all have different voices, but that doesn't necessarily compromise our ability to interpret the words. So there is an actual packing of words using variations in sound. I see what you're saying, right? Error correction over ambiguity; right, right, you've got some range of things that you can do. You'd like to have common utterances be short, but how can you do that so they're still clearly distinguishable? Well, you'd like to have a confusion matrix that told you more than just "did he say it exactly right?" or "is this word just different enough?" Yes. For example, yes. Okay.

So, centralized control, another one of the things that we do. The way we compute is we have the central processing unit, the CPU, talking to the RAM, the random access memory. And it sends addresses and gets back instructions to execute and data to operate on. And I call out this one thing, the program counter. That's one tiny, tiny little register in a modern machine; it's 64 bits long because it's the size of an address. The program counter specifies the next instruction to be executed, from anywhere in memory. And the real-world fact of security is: if you can divert the value in the program counter, even once, under external control, in a way that wasn't expected by the programmer, then to first order you can always take over the machine. So, talking about correctness and points of high nonlinearity in the transfer function: the program counter is unbelievably nonlinear.
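(Backing up to the correctness cost curve for a moment, here's a toy sketch of what graded correctness could look like, as opposed to every-bit-right-or-100%-wrong. The shape of the curve, relative error clamped to [0, 1], is made up purely for illustration; a real application would supply its own.)

    // A toy correctness cost curve: score an answer by how close it is to a
    // reference, instead of all-or-nothing.
    #include <algorithm>
    #include <cmath>
    #include <cstdio>

    double correctnessScore(double answer, double reference) {
      double denom = std::max(std::fabs(reference), 1.0);         // avoid divide-by-zero
      double relativeError = std::fabs(answer - reference) / denom;
      return std::max(0.0, 1.0 - relativeError);                  // 1.0 = perfect, 0.0 = hopeless
    }

    int main() {
      std::printf("%.3f\n", correctnessScore(2.0, 2.0));     // exactly right  -> 1.000
      std::printf("%.3f\n", correctnessScore(2.002, 2.0));   // close          -> 0.999
      std::printf("%.3f\n", correctnessScore(1e9, 2.0));     // off by a lot   -> 0.000
      return 0;
    }

(A curve like this has a derivative: it can tell you which operations are worth armoring, say by repeating a critical comparison ten times, because they move the score the most.)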
Attacks on security inevitably end up, at some point, diverting the program counter to some place it wasn't expected to go, and then bad things happen. And from the moment of that first diversion, everything else is just icing on the cake; the work is in finding the way to make that diversion.

Now, seeing a block diagram like this gives you the impression: oh, this seems reasonable, we've got these two things, they're talking. But this is not what it actually looks like in the real world. So I went digging for the statistics on this particular laptop, my actual laptop, to see what this should look like more to scale, and it looks like this, in terms of the number of transistors assigned to the different functions inside the machine. Here's the CPU; it's got 625 million transistors, according to Intel. And this is eight gigabytes of RAM, which is what this thing has got, which has got some 68 billion gates in it. Now, again, if you get into the electrical engineering, these things are not really transistors, and these aren't really transistors either, there's all kinds of fuzzy stuff, but you get the idea. The CPU is this unbelievable bottleneck. Every single thing in here cannot do anything, to first order, without going through there. The thing is incredibly hot, and it's incredibly high leverage in terms of what you need to attack, the only thing you need to attack, to make things go wrong.

Well, if we're going to give up on centralized control, what are we going to do? Here again are our CPU and memory, being clocked at some clock speed. This was, say, 1940; this is, say, 2000. The entire history of scaling has been to make the path between the CPU and the memory wider, more bits going back and forth at a time, make the clock go up and down faster, and make the memory bigger. That's how we've scaled through the history of computing, and that's what's running out. We can't make these clocks go any faster without having the thing melt down, or keeping it under liquid nitrogen. So instead we've gone to this kind of thing, with these multicore chips, to make you think you're getting something better. When in fact the secret truth is, it looks like you've got three cores that you could run super fast, but if you actually ran all three cores that fast, it would melt down. So the CPU secretly throttles it; you can really only run one or two of them flat out, for more than a little fraction of a second, before it shuts them back down.

What we need to transition to is not starting from a single-celled creature and ending up with a really big single cell, which is what we have done so far. We have to make the transition to multicellularity, where we now let this thing actually get smaller. The CPU gets slower. The memory gets smaller. But now we scale by sticking zillions and zillions and zillions of them together. And this is distributed control. We have to figure out a way to do this without saying: here is the privileged head node, everything goes through here, and these are all slaves. Because if we do that, then of course we just have to attack the head node, and in fact we still have centralized control; it's centralized in the head node. We have to get past that and figure out how to do decision making anyway. And this is an important point. Yeah, exactly. So in order to do a non-trivial computation, at some point we're going to make a nonlinear transformation, and in order to do that, to first order, we're making a decision. We're choosing to go left or go right.
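(A toy sketch of making that left-or-right choice with no privileged head node at all; everything here is illustrative, not a real consensus protocol. Each node on a small ring sees only itself and its two neighbors and repeatedly adopts the local majority.)

    // A ring of eight nodes, each seeing only itself and its two neighbors,
    // repeatedly adopts the local majority (0 = left, 1 = right). With this
    // starting pattern the whole ring settles on "right" within a few sweeps,
    // and no node ever acted as the boss.
    #include <cstdio>
    #include <vector>

    int main() {
      std::vector<int> vote = {1, 0, 1, 1, 0, 1, 0, 1};
      const int n = static_cast<int>(vote.size());
      for (int round = 0; round < n; ++round) {
        std::vector<int> next(n);
        for (int i = 0; i < n; ++i) {
          int left = vote[(i + n - 1) % n], right = vote[(i + 1) % n];
          next[i] = (left + vote[i] + right >= 2) ? 1 : 0;   // local majority of three
        }
        vote = next;                                         // everyone updates from local info only
      }
      for (int v : vote) std::printf("%d", v);               // prints 11111111: "go right"
      std::printf("\n");
      return 0;
    }

(Real consensus, leader election, and quorum machinery is far more careful than this, but the point is only that a single collective decision can emerge without routing everything through one node.)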
And that needs to be a single decision, so that our body doesn't tear itself in half and go partly left and partly right. So the fact that I am saying we need decentralized control does not mean we are not making singular decisions. So, consensus: some kind of collective action to say, well, I think we should go left, or whatever. But we no longer want it to be the case that the locus of that decision is always in this one little CPU thing. Now the locus of control might be: I think Luis is the guy to pick where we should go for dinner. And we do that, and there's this big nonlinearity that occurs because he likes weird food, or whatever it happens to be. Okay. So the point is, we're taking what had been fixed in space by hardware, the CPU where all the decisions happen, and we're lifting it up and making it a software problem: now we have to program consensus, voting, leader elections, quorum detection, stuff like that, to make decisions.

Programmable gate arrays, I mentioned this, are similar, right? Because we have lots of hardware that's all connected together; this could be a programmable gate array. But in fact, the way programmable gate arrays are used now, the contents that determine what happens in each of these things get initialized at power-on time. The circuit wakes up, it says, oh, it's brand new, it's empty; get the bit stream that determines what kind of hardware this is going to be, download that once, and then you're done. This is a far more flexible thing: this little area could be acting like memory now, and half a second from now it'll win an election and suddenly start acting like a processor. FPGAs by and large can't do that. In some weird cases they have some hot reconfiguration, but it's very limited.

Sure. The internet is about as close an example as most people are familiar with. The internet has fixed-width addressing, so that means it's going to run out, and in fact it kind of already has, for the smaller IPv4 addresses. So it's not actually indefinitely scalable. If we added more and more and more internet, eventually we'd run out of internet. And it's literally true that there are merely 10 to the 38th addresses available even in the bigger, wider internet that we're trying to transition to, IPv6 with its 2 to the 128th addresses. That seems like a lot, but that's not the point. The point is we have to have a centralized naming service, we have to guarantee everything is unique, and so forth.

Alright, let me skip this one because we've taken a whole lot of time. I've mentioned indefinite scalability a couple of times. The idea is that the hardware-software renegotiation has two key elements. Number one, hardware becomes best effort, and software also becomes best effort. Number two, hardware must be indefinitely scalable. However you design your hardware, it must be possible to add more and more and more of it and build an arbitrarily big computer, if I have the real estate, the power, the cooling, and the money. I can build a computer from here to Jupiter, if I can find the materials, without ever running into the IPv4 address limit or the synchronous clocking limit or whatever. Okay. And adopting this is, again, another hard choice, because if you look at how electrical circuits are designed, they are constantly exploiting the finiteness of the circuit. They're throwing errors out the edges. It works fast inside, and communication off-chip is incredibly slow.
All of that stuff. By adopting an indefinitely scalable approach, all of those cheats are forced to the front. The fact that it's so slow to move off the chip, that's going to be your bottleneck in an indefinitely scalable evaluation of hardware, instead of the speed of what happens inside. Alright, we're totally out of time; let's run on.

So the final thing is this idea of synchronous clocking: that the computer has a single clock that everybody dances to. The drummer goes, and everybody goes, and that makes things a lot simpler, avoids all kinds of trouble, and it's incredibly expensive. In modern VLSI designs we are now paying 40% of the total power budget just to push the clock out to everybody on the chip. Almost half of the power is not doing any computing at all; it's just saying here, here, here, here, like that. You see, we're getting to the edge of what makes sense. The hard assumption that we make is: we're going to ditch synchronous clocking. We're going to build a grid that's going to have events happening all over the place, and those events are going to occur asynchronously. They're going to occur when they occur, and that's it. And again, it's the hard assumption to make, but in exchange, if you can actually write software that does something under non-determinism, indefinite scalability, open-ended systems, and asynchronous event delivery, that software is going to be tough.

Here's the machine that's going to do it. This is our 2008 round of hardware. It's actually a tiny little board; there are like eight of them there, each of those things on the diagonal. It's like a 2007 cell phone that we put on our own board, so it could talk to its four neighbors simultaneously, and we're hoping this year to maybe do another generation of hardware using lessons learned. But the idea is, each of these tiles of hardware has a little patch of cellular automata grid on it, and if an event happens in the middle of the tile, it just does it. If an event happens on the edge, then when it happens, the tiles will communicate with each other to say what the changes were. So the tiles will try to hide the fact that the grid is partially sitting on corners and edges and so forth, but they only try so hard, because they don't have to. It doesn't guarantee every site will get the same number of events. It doesn't even guarantee that different sites will see the same rate of events.

So let's look at a couple of demos before we completely run out of time. Alright, this is the simulator for the Movable Feast Machine that we're looking at. These things here, this is simulating two tiles that are connected to each other, and we can put stuff in here. Let me do the little demo that I do these days. So this is me: I drew a box. Alright, and that's great. It has an inside and an outside. The box is made out of an element called Wall; there it is. Now, I could make a cooler thing here by making a machine that builds a box, and it does a much better job than I do. It's cheaper, it's faster, and it's great. Now, if I simulate the passage of time using my eraser tool, you know, these things all gradually get chewed up, and that's the nature of the business. The suggestion is that where we want to go, and this is where living software comes in, is something like this. Oops. Let's get the pencil back. Alright, it's pink, it must be alive, and it's acting differently. Let's make the eraser bigger so we can see what's happening.
So this is not just a piece of wood shaped like a box; this is a living thing that knows where it is in the box, and it knows what its neighbors should be. Whoops, I killed it, like that. It took a pretty big gun, though, and I can knock more of these out, and so forth. Okay, this, I suggest to you, is an example of living software. It's not just a pattern of bits that were laid down once; it's a series of constraints, live constraints, that are being checked and rechecked all the time. Whenever one of those asynchronous events happens to land there, the thing will come and check: is everything the way it's supposed to look? Maybe it is, maybe it isn't. Whoops. It checks against its built-in notion of what it's supposed to be.

Let's take a look at one of these atoms. We zoom in on this thing. Alright, this is a box element. Whoops. Oh, that's interesting: it's a box element inside the box element. It's got a data member called a line, and inside the line it's got a data member called a position, which is where it thinks it is on the line. If we pick adjacent guys: 11, 12, 13, 14, 15, and so on, like that. Each piece knows where it's supposed to be on the line, and we could just mess that up. And this is the code that actually implements the box that we just saw. Without going through the details of it at all, it's written in a language called ulam, which we have developed ourselves and built a compiler for. Elena has done the primary development on the compiler. It compiles into C++, which then compiles into code which at the moment goes to the simulator, but could, and hopefully soon will, actually go to new tile hardware. And the idea is each element of the box has a little line thing that keeps track of where it's supposed to be, and the line... that's too complicated to go through, I'm sorry. But the important point is this code right here. The line says: am I the minimum end of the line? No, I'm not the minimum. Is the guy next to me empty? Well then, go ahead, make a copy of myself with the position being one less than mine. On the other hand, if I'm not at the max, and the spot in the greater direction is empty, make a copy one greater than me. So this is reproduction (there's a rough sketch of it below). This code is part of the DNA, so it's in every cell. That's right. And this is considered to be part of the physics of the world. We are gods when we're programming this thing, so the goal is to come up with a periodic table of these elements that allow us to do useful things, that we can burn into tile after tile after tile, and then let computations run on top of it.

Correctable? These guys? Yes. And in fact these guys work as hard as possible not to be adaptable. And that works great when we just hit it with something like erasing stuff. But if we do something nastier, like hit it with x-rays that just flip random bits, now we start getting things going crazy, like that. Because we're flipping bits inside that can make it think it's in the wrong position, or that it's turning in the wrong direction, and things can get worse and worse and worse. Yes. We worked on that. If you look in the simulator, as I come in here, get rid of this thing, as I come in with the x-ray tool, you see those little yellow triangles in there. Those little yellow triangles represent one of two things. Either we went to fetch the atom and tried to find the code for that atom type, and there was no code for that atom type, in which case we just erase it. Or, while we were executing the code, it failed: it violated some built-in assumption, and in that case, once again, we just erase it.
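(To make the two ideas above concrete, the asynchronous events and the line element's reproduction rule, here is a rough C++ sketch. It is not the actual ulam code: it ignores the event window, the two-dimensional grid, and the erase-on-failure policy, and every name and number in it is made up.)

    // A one-dimensional toy world. Events fire at random sites with no clock
    // and no fairness guarantee; a non-empty site runs the "line" rule: if I'm
    // not the minimum and the lesser neighbor is empty, copy myself there with
    // position one less, and likewise in the greater direction.
    #include <cstdio>
    #include <random>
    #include <vector>

    struct Atom { bool empty; int position; };

    void behaveLine(std::vector<Atom> &world, int i) {
      const int maxPos = static_cast<int>(world.size()) - 1;
      Atom &self = world[i];
      if (self.empty) return;                                    // nothing lives here yet
      if (self.position > 0 && i > 0 && world[i - 1].empty)      // not the minimum end
        world[i - 1] = Atom{false, self.position - 1};
      if (self.position < maxPos && i < maxPos && world[i + 1].empty)
        world[i + 1] = Atom{false, self.position + 1};
    }

    int main() {
      std::vector<Atom> world(16, Atom{true, 0});
      world[8] = Atom{false, 8};                 // a single seed in the middle
      std::mt19937 rng(12345);
      std::uniform_int_distribution<int> pick(0, 15);
      for (int events = 0; events < 500; ++events)
        behaveLine(world, pick(rng));            // whichever site gets the event, gets it
      for (const Atom &a : world)                // the seed has (almost surely) filled the line
        std::printf(a.empty ? "." : "#");
      std::printf("\n");
      return 0;
    }

(The real system runs this kind of rule on a two-dimensional grid of tiles, under the event-window discipline, and erases any atom whose code fails, none of which this little sketch models.)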
So the code might not work if we flip random bits in the thing, and it might cause the moral equivalent of a segfault or whatever it is. But the simulator tries as hard as it can to catch that, and there's a predefined rule: you get erased. And it's up to your neighbors, the rest of the system that you're part of, to do something about that. There's determinism in the small, from the time that we call the behave function here. The behave function is the one that's called automatically when it's time for this guy to have an event. And it's up to the hardware and the operating system to provide best-effort determinism for the length of one event, and that's it. After that event is finished, there's no guarantee we're going to come back any time soon and give you another one, or anything like that.

Let me just put the very last one up, because it's kind of cool and it's brand new, and then we can stop, because this is going really long. So, one of the things that you need to do, if you're actually going to build non-trivial stuff, is be able to build large objects that do things. So here's an object that grows and then moves. And the way it moves is by creating these swap lines. The swap line refuses to get too far ahead of itself, but whenever it's nearly lined up, it just swaps itself with whatever is next to it, and the net result is that the whole thing moves one square in the other direction. The idea of how to get large-object motion in cellular automata goes back at least to von Neumann and Ulam in their original universal constructor designs, where they did it mostly with the constructor arm copying thing. And it's been worked on by several people since, including Michael Arbib, who did a sort of theoretical treatment of it, but to my knowledge this is actually a little bit new. So not only do we have a thing moving, but we can actually, let's pause this a little bit, put stuff inside it. Maybe a little bit of wall, what the heck. And it all gets moved as well, because the swap line automatically spreads to keep track of the contours it's going over, and swaps everything that's inside it, like that. So this is brand new, last week.

The next step... say again? Sure, yeah, yeah. Well, this guy's not so much fun, because he's actually reached the edge of the universe and he's smart enough to stop, but let's make another one. All right, so the first thing is we'll do some not-too-nasty stuff, like we'll just, you know, blow holes in it. And because this wall constructed itself from a single seed, it actually knows how to restore itself pretty well. The swap lines, on the other hand, if we kill some of them, they don't necessarily know how to regrow themselves, and so sometimes you get some tearing; that's what we're seeing here. So this is basically, you know, a transporter accident, evil Spock, that sort of thing. If we bring in the x-rays, it gets much worse, because now we can start faking guys out so they're not where they thought they were, and we can get things to go very, very bad. The design of the swap line, and the underlying wall, which is called the diamond wall, actually has a bunch of robustness features that try to recover from this. But motion in the land of x-rays is a dangerous business, and we just have to realize we're going to take a lot of damage there. I've talked way too long. Thank you so much for listening. If there are any more questions, I'd be happy to take them.