I'm Dave Ackley. I've spent most of my life in computing, starting with Fortran on punch cards in the 1970s. It was so easy to make a computer do new things. It was great fun, and I've just followed my nose from there. In the 80s I did learning algorithms and function optimization. In the 90s it was artificial life and distributed social networks; I've kept one of my worlds going ever since. In the 00s it was biological approaches to computer security and more artificial life. And over the decades, in and out of academia, two thoughts stuck with me. First, that living systems and computer systems are really the same kind of systems. And second, that we've all kind of painted ourselves into a corner by focusing on software efficiency and correctness, while just assuming that hardware is a perfectly reliable world in a box. Because as deployed systems, computers are not perfectly reliable, as computer security failures drive home every damn day. But we keep blaming those failures on anything and everything except the basic design of the machines. It drives me crazy. I'm embarrassed by my field. Now, I talk about that a lot, but today I want to go in a different direction. If we can imagine computing without perfect isolation and repeatability, without deterministic execution, then in the big picture we can take computing ideas, software, programming languages, application program interfaces, APIs, and apply them to understanding people and their interactions. People use language to inform and persuade each other, and I think we should say, to program each other. And that's the theme of today's nutjob rant: We are coders. I made a button for it. Now, to set some expectations up front. For starters, I've got kind of a lot of moving parts here, and it'll be amazing if they all work. Also, I'm sort of sniffling a little bit.
And the talk kind of goes all over the place, back and forth between people and manufactured machines, but we're always aimed at implementable processes that can run, at least in principle, on either. We start with a meta-meta big picture view, but we'll have a lot of little specific drive-by examples, and a couple little demos too, I promise. And I say there's good code in here that's really got potential. But by itself, this code is not going to make all your dreams come true; it's not even going to get you rich with a silicon unicorn anytime soon. It's ideas, code for you to judge. And if it makes some sense, maybe you'll want to support or explore it further. So now, here, you know, the Critic: Well, if all there is here is potential, it better be funny too. Well, you know, I'll try. We'll see. And one more critic-y thing up front. If I'm doing this right, at some point you'll probably think, oh please, that's so obvious. It's just rhetoric, philosophy, Dale Carnegie, psychology, phenomenology, and glib and poorly executed as well. And on the one hand, you know, I'm like, I know, right? When doing science, we're always looking for the unexpected fact. But here we're doing something more like software engineering or API design, where we're looking for the broadest applicability with the least astonishment for the coders using the API. So the goal here is not to be specifically surprising, but universally obvious. Let's see how it goes. One of the great human activities is trying to explain stuff, to paint the big picture. People say it's all about money, or it's all just physics. And the variable, the X, can be like anything. Just physics, just business, just luck. All about faith or love or connections. About guts and honor. Usually it's some kind of fancy, jargonic, incantation version of that stuff. And it's always a huge exaggeration. Money really doesn't explain why the sun shines. Physics really doesn't explain why my name is Dave. It's all just X.
But not all stuff is X. And some is not just X. So "it's all just X", strictly speaking, is four words and two lies. Well done. Of course, the talker says, no, no, it's not lies, let me explain. But if you fall for that, then you're stuck there listening as the shining answer to everything melts down into this big mess of exceptions and redefinitions and fine print, until all you really know is that sooner or later somebody's going to try to get you to do something. And maybe you do it, maybe you don't. And maybe it's worth it, maybe it's not. If you do it, you buy it, you own it. It's your call. Which brings us to this. Right now it's: We are coders. It's all about code, about programming. That's this X. So officially this is the second of six lectures in the Hyperspace Academy Introduction to Classical Hyperspace course. But the first lecture was seven years ago, so I really need to pick up the pace if we're going to get through this. We'll see. So we are coders, meaning we're all coders. This big picture we're painting with a nerd palette: code, debug, deploy. But it applies to all of us whenever we speak or communicate. We point directions, we ask for the salt, send a tweet, pass a law. It's all code, and we're all coders when we do it. Now, the purpose of deploying code on any machine, always, is to get it to do some work that we want done. We're all coders, and we're always on the lookout for machines that could do us profitable, reliable work. Which sounds like it's all about money, but money is just one example. Deploying profitable code can be as simple as: Look out! Hey, thanks. And here we're using work in a completely mechanical, physical sense. Moving salt molecules toward us is work. Tapping a screen is work. Stopping at a light is work. And now the Critic says, but that's ridiculous. The real purpose of a code deployment, as you call it, depends on its meaning and our intent and our consciousness and free will and bup, bup, bup. Sure, yeah.
Always tying code output to some act of physical work, however insignificant, is a fantastic oversimplification, but it puts all code on an equal footing. It's a universal solvent for evaluating and comparing code. And that's potentially big. And yeah, saying it's all about profitable, reliable work for us sounds totally selfish and shameful, but we're all experts at running that computation in our mental fortress of solitude. We just don't usually talk about it, because we don't see how that would do any useful work for us. So a basic goal of a Hyperspace Academy curriculum is to make this view so thunderingly obvious that it's undeniable. To communicate more effectively, to corner hypocrisy, to make things better. Not just for society as a whole, but for us individually too. Unless we're grifters. Now, it's clear that code is covering a lot of our canvas here, and we could imagine that real coders might object to the dilution of their jargonic incantations. And we'll touch on some issues like that at the end. But Hyperspace Academy rules: first we go for it. And our starting point is: it's up to the coder to make the code work. Now that might seem obvious, but traditionally a coder claims they're done when the code is correct. When it does what the client specifically asked for, kind of deal-with-the-devil style, whether deploying the code actually achieves the client's larger goals or not. And that has to be partly right. A coder can't do actual magic, and in the end, the client said whatever they said. But there's usually ways a coder could have done a better job and didn't, and they'll fall back on mere correctness as needed to paper over any bad spots. And typically these days, it's hard to find any clean line between coding and deployment. It's a blur of design, debug, deploy, design. I mean, come on, the whole point, the added value, of software is that it can be changed easily. With a button click, a flash of insight, a change of heart.
And clients usually can't know exactly what they want. They need to try and see what works. Design, debug, deploy, design. Now for sure, there are exciting moments of great code amplification, when a blockbuster movie or a song or a game comes out, when a slogan or a meme goes viral. But we are coders, and we say every running of code is a deployment, even the quickest little go while we're debugging. And we say code amplification, distribution and reproduction, is part of code execution. So correct comes way later. We begin with: real coders ship code. Which means that shipping decisions and even machine maintenance are partly coder responsibilities. When all your automation fails, and it will, you personally have to answer the beeper for your machines, or it's on you for not caring. And that's a drag, but it's fundamental. You can wish, or you can whine, or you can be a total jerk, but it's up to the coder to make the code work. Now, the Critic's like, you know, well, so what? Truck drivers or oil painters could say the same. We are painters, we ship art. Truckers ship everything. And then we're like, yeah, sure, sure, big picture. But we are coders has some neat features too. For example, I can prove to you, kinda, we are coders. Because what is we? Well, for starters, it's you and me. And we're pretty sure of two things, you and I. First, I'm a coder shipping code. This code, right now. And second, you're interpreting this code somehow, also right now, which you can do because you're a coder too. QED, kinda. And even though those two nows, mine and yours, are nowhere near each other in space and time, they're both absolutely legitimate, because code can be handled two ways: as data I'm shipping, or as program you're interpreting. But the code itself is the same either way. So we're really better off centering the view on the code. Yes, at one end there's a coder, at the other a machine, but those roles are fluid. The code's the fixed point.
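That two-way handling of the same code, as data being shipped or as a program being interpreted, is easy to see in a conventional computing setting. Here's a minimal Python sketch (the string and its contents are just an invented example, not anything from the lecture):

```python
# The same code, handled two ways.
code = "print('look out')"

# Handled as data: something we can measure, copy, and ship.
print(len(code))  # 17 characters of data

# Handled as program: something a machine (here, the Python
# interpreter) can run.
exec(code)        # prints: look out
```

Either way, the string itself never changes; only the role it plays in the interaction does.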
The most real thing about this interaction we're having here. Us coders. Can you feel it? Now, the Critic's like, yeah, whatever. But about that machine thing. Machines are so limited and rigid and bloodless, not lively at all, not like real life, not like me. And we're like, yeah, but that's because simple machines are limited. And even programmable computers traditionally were based on deterministic execution, that perfect repeatability that doesn't actually exist for real-world systems. So we have to accept that machines can inevitably do something bad, but we can also design them to surprise us with something good, which we'll talk about in a minute. And when we do that, we're heading for a world where machines may not just solve problems but set them, and have initiative, creativity, passion, all of it. Which is good, because we're trying to run code on each other, which means we're not always the coder. Sometimes we're the machine. So an obvious question is, what kind of machines are we? Or backing up, where should we even look to figure out what kind of machines we are? These days, it's trendy to ask neuroscientists why we are the way we are. My neurons made me do it. Or my whole brain. Or it's about evolution. And answers like that have germs of truth, but they also really kind of miss the point. When asking how a programmable machine will behave, you don't focus on its silicon chips and its voltage rails. You look at its software. For sure, neurons and brains have their little tricks and their special quirks, and we need to understand them because they're part of our hardware. But we already know they're hugely adaptable and modifiable. They're programmable by external code, which we can transmit if we can figure out a way to express it in a form the machine will interpret. What we coders care about is: what can the machine do? What kind of actions can it perform?
And what's the dictionary that will allow us to create code that will trigger and orchestrate those actions towards reliable profit for us? So instead of looking at the brain or physics or history, we're better off looking at the code people run. That could mean looking at structural coding details, or turns of phrase, or images or symbols that might evoke particular responses. What coders want to understand are the virtual machines, the programming language interpreters, that people can run if a piece of code appeals to them. But where do the dictionaries and the virtual machines come from? Can we fork an existing code base and modify it for our own purposes? Well, if we just need a secret code for our club, that's no problem. But it's harder if we want our code to run on more than a treehouse of machines. After all, most available machines are already doing all sorts of profitable, reliable work. And if running more of our code means running less of some other code, sooner or later there'll be coders pushing back at us. Like in academia: it's packed with people squabbling over disciplinary boundaries and the proper meaning of jargon. A fundamental part of making code work is dictionary maintenance, which, yeah, sometimes feels like a thankless chore. But hey, look at mass media, and much of what you're seeing is dictionary and machine maintenance. Incessant advertising, competitive reframing, gatekeeping, all of it. So achieving wide distribution of significant new code is hard. But however much the entrenched coders hope we forget it, software is soft. So as long as there's smoldering embers out there somewhere, a code wildfire could break out if conditions became right. And if it does, the deployed code base can shift like that. And coders that had been raking it in are suddenly all like Harvey Weinstein going, what the hell happened?
You know, a lot of stuff does suck on this planet, but I am encouraged by kids these days, and not just kids, calling out the jerks, mocking the bullshit, memeing the angles. They're cleaning up the code base, and that gets big respect from me. All right, so there's lots of ways to view the basic pieces of a programmable machine. One take is this nerd-famous title by Niklaus Wirth: Algorithms + Data Structures = Programs. An algorithm is steps to perform over time. A data structure is information laid out in space. Put them together, you're computing. Ubernerd Don Knuth views programming as an art, like writing cooking recipes. Knuth does a lot of math, but it's hard not to like a book that opens by comparing coding to poetry and music. And I'm okay with these approaches as far as they go, but after five decades in computing, to me they all feel kind of wrong for understanding people machines and future manufactured machines. It seems to me there's four key processes. Input and output are obviously essential, although they're both completely marginalized by the traditional perspectives. Sequence is the process of orchestrating internal changes over time, so that we can generate different states at different moments in the same place. It's very powerful. And judge is the process of evaluating the desirability of some internal state of affairs. And that's really the missing elephant in traditional deterministic computing. Input, sequence, judge, output. They don't have to happen in that order. The real power of our machine is that the programming can, to some degree, configure all four pieces to control each other in various ways. Like, we can implement a search process using sequence to generate possible states and judge to evaluate their quality, as covered in a prereq video for this lecture.
Of course, people write search algorithms in deterministic machines, for deterministic machines, too. But when everything is 100% controlled from the top, there's no need for any on-the-spot decision-making. And in fact, deterministic computer coders end up using pseudo-random numbers to make their deterministic code less predictable, as discussed in another prereq video. But judge really starts to shine when we finally accept that determinism fails and we're really not the masters of the universe. When what actually happens is not necessarily what was planned, then what's being judged is not pseudo at all. It's the real world. So organizing code around searching is hugely important, because that's what opens up space for our machines to create knowledge, display creativity, and surprise us with their results. Now, the Critic's all up with, well, I get we need input, sequence, output. That's just traditional computing. But judge isn't that important, considering we can implement whatever judging we need ourselves. Like you just said: search. We use universal computers, you know. Here, let me Google Turing completeness for you. Yeah, thanks. Very thoughtful. Except for machines in the real world, beyond the cozy prison walls of determinism, we really can't implement all the needed judging. Stuff is always happening in the moment, on the spot, that each machine has to take care of itself. We deploy code on the judge process because it's there and it's powerful, but it isn't solely or even mostly there for our grubby little search problems. It's really more about dodging things, sizing up strangers, and picking up 20s on the street. More generally, search is just one kind of behavior we can implement. There's lots of other dynamics we could create by cross-connecting the four processes in different ways. It's a plenty powerful machine. And to help with dictionary maintenance, I wanted to boil it down to the simplest, most memorable thing I could. I started playing with arrows.
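That cross-connection of sequence and judge into a search process can be sketched concretely. The following Python is an illustrative assumption, not code from the lecture: "sequence" proposes one-bit variations of the current state, "judge" scores them against a hidden target, and better states are kept until the judge is fully satisfied.

```python
import random

def judge(state, target):
    """Judge: score a candidate by how many bits match the target."""
    return sum(s == t for s, t in zip(state, target))

def search(n_bits=20, seed=1):
    rng = random.Random(seed)
    target = [rng.randint(0, 1) for _ in range(n_bits)]  # what counts as good
    state = [rng.randint(0, 1) for _ in range(n_bits)]   # where we start
    while judge(state, target) < n_bits:
        candidate = list(state)
        candidate[rng.randrange(n_bits)] ^= 1            # sequence: propose a variation
        if judge(candidate, target) >= judge(state, target):
            state = candidate                            # judge: keep it if it's no worse
    return state, target

state, target = search()
print(judge(state, target))  # 20: the judge fully approves of the state
```

The point isn't this particular loop; it's that search is just one wiring of the four processes, and other wirings give other behaviors.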
Input arrow, output arrow, up and down for sequence, a loop for judge and a loop for sequence. And eventually I packed them all together and I made this symbol. Whoops. There we go. Which I call the self-image. What do you think? It's a cartoon icon, a reminder of the key processes of the machine we're coding for. Oh, I've got one here. So here's one that I printed up on my little printer. Input, output, sequence, judge good, judge bad. I've been living with it a few months and I like it. It's fun to try to map real-world stuff onto the basic self-image processes. I even made an animated one, using some colored LEDs I had lying around, to help suggest the range of behaviors we could implement on the machine. I made a switch panel to control it, and I've got it set up here with the self-image in a dark box to help keep it from getting washed out by all these lights. Those LEDs are bright, but I've got them turned way down so that I can run the whole thing on just USB power. Now, I'm not a very accomplished self-imagist, and the switches I bought are way too stiff. But just to get the idea, let's try playing a few behavior etudes on the self-image. All right, so here we are down here now. Now, the first thing we can see is that it's doing stuff. It's dark for the moment, but it's doing stuff even though we're not asking for anything, and that's representing the idea that the machine has its own business. It's not 100% sitting there waiting for our business. We have to convince it to do what we want. All right, so the simplest thing I can think of is something that just does a repetitive output. Doesn't depend on any input, doesn't have any internal sequences. So it's something like a lawn sprinkler. Anybody have lawns anymore? One step up from that could be something like stimulus response. Here's an input, makes an output. Here's an input, makes an output. We could step up to traditional computing: take some input and do nothing else, then do some sequencing.
We'd probably do that really fast, because we're a computer, and do nothing else. And when we're done, provide output, and do nothing else. Then there's the observer, looking and thinking, the kid in the corner who doesn't say much, watching everything. Or on the other hand, there's the motor mouth. Can't stop them, they just go and go and go. Now we can get the judgment mechanisms in here. We could do like a search process. So there'd be some kind of input telling us what we're looking for, what kind of threshold of quality we want, and what we have is not good enough. So we use the sequence to generate new variations, and we evaluate them. They're bad, they're bad, they're bad. If we're lucky, eventually we find one that's good, and we can send the output. We've solved the problem. Or, like the Critic: a little bit of input, maybe a little bit of thinking, and then hate it, and output, output, output. Now, one thing about it is, I called this stimulus response, but these things are all completely ambiguous. There's many things that could look like this. For example, when you're in a flow state and everything's just working out great, it could be: you're taking input, you're getting the correct answer, and everything's wonderful. It's like Michael Jordan in his best days. And maybe every once in a while you have just a little thought that kind of pokes through and then goes away, that kind of thing. So what else? We've got the daydreamer. Not paying attention to any input, not actually doing anything, just thinking, thinking, thinking, and loving it, loving it, loving it, like that. Or similarly, the depressed person. Not doing anything, just thinking, thinking it's all terrible. Input happens, it just makes stuff worse. So you get the idea. Let's do one more. This is a long one; I don't know whether I can really get it. Let's do advertising. So it's like: don't you hate it when you see this? But if you do this, you'll see this, and you'll be happy.
If you do this, you'll see this, and you'll be happy, like that. Now, actually, the self-image has some features that we haven't really used before. And the main one is this flowing stuff here, like this. This is when we're paying attention to the input, the flowing and the sequencing; we're doing it. But in the advertising, that key step, if you do this, that's a program that's trying to tell you to do a new thing. That's code aiming at the do. So we can say it; if we go the other direction here, it would be, you know, don't you hate it when you see this? But if you do this, can we see that? Like that. The whole bar blinks, meaning we're programming that. Then you'll see this and you'll be happy. And now the do is different. You can reprogram it, it's flashing, and now it's a new kind of do. And we can do the same thing elsewhere. We can reprogram, you know, say, well, it doesn't go the way you think it goes. This is what's actually going to come next. And it's like, oh, I see. Or, you know, you're interpreting the world, but then it's like, oh, energy and matter are actually equivalent. And it's like, oh, that's new code. And now one sees the world differently. You can even reinterpret what good and bad means, which is really kind of weird. It's like, okay, now this is what bad is, and this is what good is. And so we change our fundamental view of how to evaluate stuff that's good and stuff that's bad. That's very weird. Now, fortunately, if we do the Vulcan death grip on the controller here, did I get it? Yeah, I got it. We can restore it back. It's harder on real people machines. Okay, that's about it. So, you know, I've got the 3D files and stuff for this. They're all a big mess because I was in a rush, like I always am. But I'm happy for good-spirited folks to make their own self-image symbols, animated or not; that would be great. Okay, so that was our demo. And, right.
So the self-image focuses on the machine's processes, and I think that's the right place to start, especially as affirmative action for input and output. But notice how there's nothing in the center. Where does the input go to? What does the sequence read and then modify? Now, we'd like to say the answer is: it depends. It could be neurons in there, or silicon, or biocomputing goo. But we can't just stop there. Like the nerds at the Internet Engineering Task Force, the gold standard is running code. So if asked, at the very least, we have to be able to sketch a plausible implementation of our code, or GTFO. And that's where we re-enter hyperspace. In lecture one of this series, we started out by making low-dimensional graphs, then we struggled trying to make higher-dimensional graphs, and finally we switched to predicates, which amount to a set of yes-or-no questions about whatever we like, as long as we choose to implement them. We arranged the resulting zeros and ones in a bit vector, as a point in some high-dimensional space, which we called a hyperspace fix. And that's where we left off. Now, in the hill climbing video mentioned earlier, we also ended up with a bit vector, which we searched using various hill climbing algorithms, and that's our plan here too. So in the middle of our running self-image, we have some implementation of a hyperspace, which could be something exotic like a neural net, but to keep it simple here, we'll just say it's a chunk of RAM, some conventional memory. And now we ask for a leap of faith. We claim that we can implement suitable predicates for all the machine processes we care about: sensory and kinesthetic inputs, energy levels, joint motor angles and forces and so on, plus lots and lots of soft predicates that we can configure by code. And in particular, we postulate a bunch of soft circuitry between the self-image processes that we can use to configure different conditions and trigger different operations.
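A hyperspace fix, then, is nothing more than a bit vector of yes/no predicate answers. A minimal Python sketch, with predicate names and bit values that are illustrative assumptions loosely following the Big Jim chile demo (not the demo's actual 20 predicates):

```python
# A hyperspace fix: a point in a high-dimensional space, represented
# as a bit vector of answers to yes/no predicates.
PREDICATES = ("is hot", "Dave eats a lot of them", "red is the best color",
              "is expensive", "big sized", "good on burgers")

BIG_JIM = (0, 1, 0, 0, 1, 1)   # one fix in a 6-dimensional hyperspace

def describe(fix):
    """Unpack a fix back into named yes/no answers."""
    return {name: bool(bit) for name, bit in zip(PREDICATES, fix)}

print(describe(BIG_JIM)["good on burgers"])  # True
print(describe(BIG_JIM)["is hot"])           # False
```

Flip any bit and, as in the demo, it's a different point; it's not a Big Jim anymore.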
And to back all that up, there's kind of a coder parlor game. When we're presenting our proposed implementation, anybody can call technical debt on anything we postulate. And if they do, then we're obligated to stop and unpack that part of our story into some simpler physical processes. And if we can't, then we GTFO. Yeah, yeah. And then, if there's rough consensus that the implementation was plausibly refined as the challenger demanded, we can, if we wish, challenge the challenger to explain what part of our refined implementation they didn't already see, or they GTFO. Yeah, it's a great game: implementation or GTFO. Anyway, you get the idea. So here, the virtual machine we implement, we call hyperspace search and sequence, limited. So let's go play in hyperspace. All right. So if I can get this right here. All right, there it is. Now, if I did get it right, that's the same bit vector we ended lecture one on. And it's Big Jim. And how do we know that? Well, we can check any of the predicates. So the first predicate was: is it hot? And Big Jim is not that hot. Does Dave eat a lot of them? Yes, he does. Red is not the best color for a Big Jim. It's not expensive, and so on and so forth. Yes, they're big sized. And yes, they're good on burgers, and so forth. So that's the idea. You have zillions of predicates; here we have 20. And you get answers for all of them, and that is the representation. Now, I can change these things. Good on a burger? I can say no, it's not good on a burger. But then, as far as my little demonstration goes here, it's not a Big Jim anymore. But okay, so we can reset it back. And now here's the point. That's just a hyperspace fix. One point. There's really not much to do with it. But if we start getting other ones up here, like if we get Habanero up here, which you can see is similar to Big Jim, but different here because it is quite spicy, and different here because red is the best color.
Well, if it's a dried one, anyway. And so forth. And they differ here: Habanero is not very big, and so on. Like that. So now, the idea is, when you have two things, when someone says Big Jim is the best and someone else says Habaneros are the best, whatever it is. Or someone says, I'm thinking about Habaneros, and you should think about a Big Jim. You can generate the hyperspace spanned by them. Which just means: wherever they agree on 1 or 0, the hypersubspace is fixed at 1 or 0 on that dimension. And wherever they disagree, you get a wild card. So here it is. They agree that Dave eats a lot of them. But they disagree on whether they're spicy or not. They disagree on the best color. They disagree about whether they're good on burgers. I don't know, maybe I think Habaneros might be a little bit intense for a burger. And they disagree on, what's this down here, on having a skinny shape. So wherever they differ, we get a wild card. And the volume of the hypersubspace is exponential in the number of stars: every time you add another star, the volume doubles. So here we've got, 1, 2, 3, 4, 5, five stars in the hypersubspace. Which means the hypersubspace volume is 32. 2 to the 5 is 32. And that's the idea. And the cool thing is, we could imagine that one of the vectors in a hypersubspace generator is what we're thinking, it's where we're at. And the other vector in the generator could be code that we receive. And we somehow have to interpret that code, and that's part of our leap of faith. But once we've done that and we get it down to these two, we can form the hypersubspace between them, and boom, now we have a place to search. We don't want to bother checking things that, what is this, that aren't good on pizza. Because everybody agrees being good on pizza is good.
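The spanning operation itself is tiny. Here's a Python sketch (the 6-bit vectors are illustrative assumptions, not the demo's 20-bit ones): agreed dimensions stay pinned, disagreements become wild cards, and the hypersubspace volume is 2 raised to the number of stars.

```python
# Span the hypersubspace generated by two hyperspace fixes.
def span(a, b):
    """Pin agreed bits; mark disagreements with a '*' wild card."""
    return ["*" if x != y else x for x, y in zip(a, b)]

def volume(subspace):
    """Each star doubles the volume: 2 ** (number of stars)."""
    return 2 ** subspace.count("*")

big_jim  = (0, 1, 0, 0, 1, 1)
habanero = (1, 1, 1, 0, 0, 0)

sub = span(big_jim, habanero)
print(sub)          # ['*', 1, '*', 0, '*', '*']
print(volume(sub))  # 16: four stars, so 2**4 points to consider
```

Identical generators give a volume of 1 (nothing to search); a vector and its complement give all wild cards, the whole space, which constrains nothing.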
We don't want to talk about things that aren't good building materials, because everybody agrees these are bad building materials; we're not interested in good building materials, and so on. So the thing is, the more similar the two generating vectors are, the smaller the hypersubspace volume. Or conversely, the more different they are, the bigger it is. So for example, if we have a Big Jim and someone wants to talk to us about drywall, well, they differ in a lot of ways. They disagree about whether Dave eats a lot of them. They disagree about, I don't know, whether they're good building materials. Yeah, sure, whether they're bigger than a bread box, whether they're made out of minerals, and so on. And in this case, we've only got 8,000 different possibilities, which doesn't sound like that much, but we've only got 20 bits here. So if we come up with the perfect anti-drywall concept, oops, where did that one go? There we go. Which one am I missing? There. We've got drywall and anti-drywall, whatever that actually is. And the hypersubspace spanned by those two vectors is a million sites. The hypersubspace is full of stars. So that's the thing to be thinking about. When we're talking about people telling us stuff, and we're doing stuff, or we're just thinking, a fundamental question always is: how big is your hypersubspace? This whole thing, these 20 dimensions, might be a hypersubspace of our million-billion dimensional bigger hyperspace that's really in the center of the self-image. And as you can see, the hypersubspace can be formed by generating between any two bit vectors. And as we search, or add more bit vectors or stuff, it can collapse. This one, we exploded it. So now we have absolutely no constraint at all. If we ask what is in the subspace between drywall and anti-drywall, the answer is everything. It really doesn't help. But between Big Jim and drywall there's some constraint, and between Big Jim and Habanero there's much more.
We could plausibly want to consider any combination of 1s and 0s for the stars, and it might be something relevant. It might be an actual plausible kind of chili pepper. That's the subspace idea. Okay. So, fixes to subspaces. Oh yeah, hill climbing. Right. So now that we have a subspace, what are we going to do with it? We were saying the idea was that the self-image has the ability to be programmed to do search algorithms. Hypersubspace search is one of its built-in things. So I've got another demo here. Yeah. All right. So now I think I've got the same vector here. The Big Jim: 01, 01, 111, 01, and so forth. We can flip bits here as well, same as we did before. Now, you'd think if there were really going to be zillions and billions of bits, putting them all in one big long straight line is really not the best way to do it. But the reason we want to do it here is so that we can use the other dimension to represent quality, judgment, evaluation. So that's what we're going to do. We've got a ruler. So now the height the whole thing is at is an indication of how good it is. Now, we've got to watch out. I'm not sure why I did this, but the numbers in the evaluation increase going down. So if we're looking for high numbers, which in this case we are, it's actually going to be moving down when we do it. Now, in this case, when we actually flip bits here, nothing actually happens, because we don't have a function to try to optimize yet. But we can get one, like that. Okay. And this is bitmatch. That just says: for each of the 20 bits, there's a secret target bit, which is the preferred value. And if you get the preferred value, you get more points than if you don't. So, you know, if I flip the first bit, ah, it made it better. I flip the second bit, it makes it worse. I'll flip it back, and so forth. Now, in this case, I happen to know that the best state is all ones.
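The bitmatch function, and the flip-one-bit-and-see probing that goes with it, can be sketched in a few lines of Python. The all-ones target mirrors the demo's first setup; the rest of the details are assumptions for illustration.

```python
# bitmatch: each of the 20 bits has a secret target value, and the
# score is one point per bit that matches its target.
N = 20
TARGET = [1] * N          # "the best state is all ones"

def bitmatch(state):
    return sum(s == t for s, t in zip(state, TARGET))

state = [0] * N
print(bitmatch(state))    # 0: nothing matches yet

# The lay of the landscape: the score each single-bit flip would yield.
landscape = []
for i in range(N):
    neighbor = list(state)
    neighbor[i] ^= 1      # flip exactly one bit
    landscape.append(bitmatch(neighbor))
print(landscape)          # every one-bit flip scores 1: all moves improve
```

From the all-zeros state against an all-ones target, every neighbor looks better, which is exactly why this landscape is so easy to climb.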
So if I go there, all right, then we achieve a score of 20, because we got one point for each bit that we matched. But we can randomize the target, ah, something like this, and now it's not all ones that we're looking for. So yeah, turning that one on made it worse. Turning this one on made it better, better, better, better, better, better, worse, and so forth. Now, what we want to do is start automating this; clicking gets old quick enough. The first thing we can do is ask: which moves are worth taking? Which way do we want to go? So we can do this, and now what we get is a picture of the lay of the landscape around a given hyperspace fix. What we're seeing here is, you know, if we just flipped this first bit, the score would go up to 12. If we did the same for the next one, it would go up to 12. If we did these, it would all go up to 12. If we did this one, it would go down a little bit. So that's good, so we could flip that, and so forth. And so here, now we can just pick out the downhill ones right away. Whoops. No, I've got that. And there we are. Did we get it? I think we got it. Right. So this bitmatch is just that simple. Now, we've got a related one, weighted bits, which is the same as bitmatch, but instead of scoring one point for getting a bit right, there's some particular little weight, which might be, you know, not one or zero but a half or whatever. So we get a little bit more structure around the landscape (it's going way up there): stuff goes down, stuff goes up, this thing hardly changes it at all, that one changes it a little teeny bit, and so forth. Now, we want to optimize this, and we don't want to do it ourselves. So we've got a bunch of hill climbing algorithms that we can play with. The first one, next-ascent hill climbing, means just go through the bits in order, and every time you find one that you can flip to get better, take it and then move on. And here we go.
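Next-ascent hill climbing, as just described, can be sketched directly: sweep the bits in order, keep any flip that improves the score, and stop when a full sweep makes no progress. The sweep-until-no-improvement termination is my assumption about the demo's behavior.

```python
# Next-ascent hill climbing: scan bits in order; keep any improving flip.

def next_ascent(bits, score):
    bits = list(bits)
    improved = True
    while improved:
        improved = False
        for i in range(len(bits)):
            before = score(bits)
            bits[i] ^= 1              # try flipping bit i
            if score(bits) > before:
                improved = True       # keep the improving flip, move on
            else:
                bits[i] ^= 1          # no better: flip it back
    return bits

# On bitmatch (one point per bit matching a hidden target), next-ascent
# marches straight to the optimum in a single sweep.
target = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 1, 0, 1, 1, 0, 0, 1, 0, 1]
bitmatch = lambda bits: sum(b == t for b, t in zip(bits, target))

solved = next_ascent([0] * 20, bitmatch)
print(bitmatch(solved))  # 20: one sweep fixes every bit independently
```

This works because bitmatch has no interactions between bits; the next few functions in the demo are exactly the ones where this greedy sweep breaks down.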
And now we can see the whole landscape is going up, because we have minimized the function... minimized the error, maximized the function, however we want to think about it. We scramble it again, and next-ascent just marches right on through again. And it works fine. We can change the target; it'll do it. Everything's good. Now, what else do we have here? Oh yeah, let's go on to the next function. This one's called linear plus even parity, and this one's a lot tougher. It's basically the same as weighted bits: for each bit there's a target, and you get a score for matching the target. But there's an additional constraint that across the entire bit vector you have to have an even number of one bits. And if you don't, if you have an odd number of one bits, you get a big penalty. And as a result, once you're in a place with an even number of one bits, which this one ought to be, everything looks bad. Everything looks uphill. If we scramble it, look at that: we're way up at zero, and there's nothing to do, because everything looks uphill. Now let's turn next-ascent off. So if I, by hand, flip one of those bits that was an uphill move, now we're way up at, you know, whatever that is, minus two or three or something. And now look at this: now all of the moves look good, which seems nice, except not all of those moves are actually going to lead to the solution, not actually going to lead to the best spot. You know, maybe if we pick the one that's most down, we'll be able to do better, and so on. So parity is an example of a high order constraint: it depends on the values of all the bits in the system.
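The parity trap just described is easy to demonstrate numerically. A sketch, assuming unit weights, an all-ones target, and a penalty of 100 (all illustrative choices, not the demo's actual settings):

```python
# "Linear plus even parity": each bit scores for matching its target, but
# an odd number of one bits costs a big penalty.

def linear_plus_even_parity(bits, target, penalty=100):
    score = sum(b == t for b, t in zip(bits, target))
    if sum(bits) % 2 == 1:            # odd number of one bits: big penalty
        score -= penalty
    return score

target = [1] * 20
state = [0] * 20                      # even parity, but far from the target

base = linear_plus_even_parity(state, target)
neighbors = []
for i in range(20):                   # examine every single-bit flip
    state[i] ^= 1
    neighbors.append(linear_plus_even_parity(state, target))
    state[i] ^= 1

print(base)            # 0
print(max(neighbors))  # -99: every one-step move looks worse
```

From any even-parity state, every single flip makes parity odd and eats the penalty, so a pure hill climber sees nothing but bad moves even though the true optimum scores 20.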
And in a way, that's just the essence of computation. The essence of going from ideas and concepts and high level stuff, you know, the ineffable beauty of free will and consciousness, is taking high order concepts that depend on many things, on lots of brain, lots of world, lots of people in lots of worlds, all of that stuff, and figuring out ways to implement: meaning, breaking it down into lower and lower order problems until eventually we get to stuff that we can actually solve, and then putting the pieces back together. So parity is essentially hard. But if we move on to... well, there's randomness and hill climbing, I'll talk about that in a sec. But if we move on to stochastic hill climbing: stochastic hill climbing, we've talked about it in the prerecorded video, is willing to go uphill sometimes. It prefers going downhill, but it'll sometimes go uphill. And, you know, it still has a tough time with linear plus even parity, because it doesn't really want to go uphill. Let's speed it up. But if we give it enough chances, it'll pick one and go uphill, and then it has to get lucky and pick one of those downhill moves, or actually the one that's on the path to the best solution. And let's see, let's reset this to a known place. I think that will work. Yeah, okay. Now I think we'll have it. We may be able to actually see it, if stochastic hill climbing actually gets there. Okay, yep, yep, yeah. All right, there, it did it. And then it blew it, because that's what stochastic hill climbing does, as discussed in the other video. Okay, we'll look at just a couple more quick ones, then we'll move on. So here's plateaus. The idea of plateaus is we take the 20 bits and break them up into five groups of 4 bits each, in order. Let's just let somebody search it. So, random-ascent hill climbing, we haven't seen it before: it's the same as next-ascent hill climbing, except instead of taking the bits in order, it tries them in random order. But first, well, we'll do next-ascent.
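The stochastic hill climbing just described can be sketched as follows: prefer improving flips, but accept a worsening flip with some probability, which is what eventually carries the search across the even-parity barrier. The acceptance probability, the 8-bit problem size, and the step count are illustrative assumptions, not the demo's settings.

```python
import random

# Stochastic hill climbing: accept improving flips; accept worsening flips
# only with probability p_uphill. Track the best state ever visited.

def stochastic_hill_climb(score, n_bits, steps, p_uphill, rng):
    bits = [0] * n_bits
    best, best_score = list(bits), score(bits)
    for _ in range(steps):
        i = rng.randrange(n_bits)
        before = score(bits)
        bits[i] ^= 1
        if score(bits) < before and rng.random() >= p_uphill:
            bits[i] ^= 1              # usually reject a worsening flip
        if score(bits) > best_score:
            best, best_score = list(bits), score(bits)
    return best, best_score

# Linear-plus-even-parity on 8 bits with an all-ones target: from any
# even-parity state, every single flip looks worse, so only the occasional
# lucky uphill move lets the climber make progress.
def lpep(bits):
    s = sum(bits)                     # matches against an all-ones target
    return s - 100 if s % 2 else s

best, best_score = stochastic_hill_climb(lpep, 8, 50000, 0.25, random.Random(0))
print(best_score)                     # with enough luck, the optimum of 8
```

Tracking the best-ever state matters because, as the demo shows, the climber can find the optimum and then wander away from it again.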
And so now, you see the first group of four is all together, and the last two groups of four are all together, but we have two groups in the middle that we haven't gotten to be all ones or all zeroes. Each of those groups of four bits is a plateau, and the trick is: until you get all four bits in a group correct, the score is zero for that plateau. So you get no signal about which direction to go, and next-ascent hill climbing, as a result, just keeps flipping pairs of bits, and flipping pairs of bits, and making no progress. But if we take random-ascent hill climbing, where we might come around and break the order in which we consider things, it will eventually figure out, just by blind luck, how to make each plateau all one value and actually solve it. And it just got the next-to-last group, the third group there; now it's only got one left. If we speed it up a little bit... oops, I turned it off... if we speed it up a little bit, there it is: random-ascent hill climbing did the job. And so plateaus is an example of an intermediate order function. It's four bits, four bits, four bits, but those groups of four bits contribute independently, so you can solve each plateau without considering the other plateaus, unlike parity, where we had to look at the whole thing. All right, we'll make it even worse. Actually, to save a little time we were going to skip this, but what the heck, we'll do it. So traps is like plateaus, except in each individual plateau, instead of the score being zero when you get the answer wrong, there's actually an uphill path leading away from the good solution.
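The plateaus function and the random-ascent climber from the demo can be sketched together. Assumptions here: each group scores 4 points only when all four of its bits are ones, and the climber accepts sideways moves (anything not worse), which matches the described behavior where next-ascent flips pairs of bits forever while random-ascent eventually stumbles onto each solution.

```python
import random

# "Plateaus": 20 bits in five groups of four; a group scores only when all
# four of its bits are right (all ones here), else zero, so a single flip
# inside an unsolved group gives no signal at all.

def plateaus(bits):
    return sum(4 for g in range(0, 20, 4) if all(bits[g:g + 4]))

# Random-ascent hill climbing with sideways moves: each sweep tries the bits
# in a freshly shuffled order and keeps any flip that doesn't make things
# worse. With a FIXED order instead, the same rule just cycles each unsolved
# group between a pattern and its complement, making no progress.

def random_ascent(bits, score, sweeps, rng):
    bits = list(bits)
    for _ in range(sweeps):
        order = list(range(len(bits)))
        rng.shuffle(order)            # break the fixed consideration order
        for i in order:
            before = score(bits)
            bits[i] ^= 1
            if score(bits) < before:  # keep improving AND sideways flips
                bits[i] ^= 1
    return bits

rng = random.Random(42)
start = [rng.randint(0, 1) for _ in range(20)]
solved = random_ascent(start, plateaus, 2000, rng)
print(plateaus(solved))               # 20 once every plateau is solved
```

Once a group happens to hit all ones, that flip is an improving move, and from then on any flip inside the group is rejected as worse: blind luck solves each plateau and the score locks it in.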
So where these things are going to tend to end up is with one bit on and the other three bits off in each group; that's the highest score you can get, unless you flip all of them off, and then you get an even bigger score. So this is what they call, in genetic algorithms, a deceptive function, because the uphill direction actually points away from the goal, and then all of a sudden there's a very steep cliff when you make that last correct move. But you have to go downhill, downhill, downhill, and then boom, big uphill, or the reverse in this case. And again, stochastic hill climbing will make more progress here, but it's the same thing: it's getting fooled as well, so it has to get a combination of lucky uphill move, lucky uphill move, lucky uphill move, and only then can it actually find the downhill move. So it's got two groups now that it's managed to find against the traps; three, four groups; lost one; and so forth. So the idea is, in the course of any given search... and now we imagine this being much more complicated, with much more subtle combinations of linear portions and higher order portions of a function, where we really have no idea how it's all going to come together, and we're going to have to... there we go, great job, let's stop. Too late. Stop, stochastic hill climbing, before it loses it. So one way that we can try to solve hard problems is by getting lots of machines to work on it, and here's the trick. Well, here's our last one; it's called needle in a haystack. In this particular one, all of the million states except one score zero, and that one-in-a-million state, the needle in the haystack, scores, I don't know, 10 points or something like that. And, you know, we can do anything we want here, we can flip any pattern, but literally we have to hit a one-in-a-million shot in order to optimize this function, and we're not going to do it. We're simply not. But if we had information from the outside world, if somebody else had solved this... I mean, if we have
billions of people, and we're all trying to solve a needle in a haystack with only a million possibilities, a billion people will find it quite easily, if they can communicate with each other and send code saying, you know, here, look in this area where the light is good. We can do much better. So we have a similar example here: if someone can tell us to look in this hypersubspace... so now we've got slashes through some bits, saying they're fixed. We've collapsed the hyperspace down to a much smaller hypersubspace. How do we know all the fixed bits should be black instead of some of them white? Because people figured it out: the accumulated knowledge of mankind. And now... whoops, that does happen, I don't really know why, it's a bug in my stuff... there we go. So we managed to optimize it. And we randomize it again, and it should be quite quick to figure it out again, because this is only a three dimensional subspace. So... there it is. Now, on the other hand, if we have a hypersubspace that's wrong, so now we've got a bit that's fixed to the wrong value, we're never going to get it at all. We can spend all our time in our hypersubspace and fail and fail and fail and fail. And this kind of reminds us of a depressed person: they seem to be trapped in a subspace; they work and work and work, and everything looks terrible. And we figure maybe that's what's going on... no, that's because, once again, I've got a bug in my code that allows us to flip bits that aren't supposed to be flipped. We'll make more of them. There we go. Oh, and look at that, it did it again. All right, let's let it go. All right. So that's what's in the center of the self-image: hypersubspace search, and a sequence processor. And sequence is, you know, just when we're supposed to do something and when we get the result. Because we're going to have to take this code as it comes in, expressed in kind of linear form, like language, and we're going to have to parse it, using an interpreter and all that stuff. So we're going to be
doing direct sequencing, not just searching, all the time. All right, so, winding up. I'm starting to lose my voice here too; it's going pretty well actually, so far. The obvious attacks first: I mean, you know, we've talked to the critic, I'm going on through. "There's nothing new here," that's the easiest attack I can think of, or "it failed to cite Heidegger," you name it. And that's really true: aside from the anticipated objections that I gave to the critic along the way, there's plenty of other things to complain about. And again, "nothing new here": well, kind of, it's supposed to be, because not specifically surprising but universally obvious. And, you know, writing science papers, it's really all about the related work, so I'm always full of shame when I don't have good related work. But the truth is, I really did sort of follow my nose for all of this stuff, and only when I was actually writing a paper would I go back and find related work, which is why it seems so creative, and poorly expressed, sometimes. "Just-so stories": there's the criticism that descriptive languages can be so general that they apply to everything, and therefore tell you nothing about anything. And that's always a risk, but I think we have an edge with "we are coders," because, again, of "implementation or GTFO." Just because we can use computation to describe a tremendous amount of stuff, it doesn't mean everything is well described as a machine, as a self-image, and so forth. It's not necessarily true. The point is that computation is where you find it, and that may take, you know, insight, cleverness, wit, to figure out how you could put a bit of code together that would get that machine to do profitable, reliable work for you. So it's not magic, not just turn the crank and get it; it requires additional application, but it doesn't apply to everything. So I think we escape the just-so story criticism, mainly because of "implementation or GTFO": the whole process is, we can talk and talk, but it's all
an IOU until and unless we can reduce it to implementation, reduce it to lower order. And finally: "your feeble skills are no match for the power of determinism." That's the one I'm actually most worried about. You know, I've spent a lot of time in deterministic execution, and it's really addictive. You get that, you know, it will do whatever you say, it'll do it really fast, and it's really fun. And to step beyond that and say, no, actually, let's deploy something super simple, and deploy a whole bunch of copies of it, in fact let's fill the machine with as many copies as we can, so that we'll have a whole army heading towards the answer to some simple problem, and say, wow, okay, we'll actually get it to work, but we're just doing a simple problem. The addiction, the risk, of deterministic computing is that it makes us feel like we should expect the kinds of results we get when we focus on correctness and efficiency only, while leaving ourselves wide open, brittle as a piece of glass, against anything unexpected. All right, so we're reaching the end, coming to the end of this go of "we are coders." It's about living computation, and taking an expansive view of code, where we abandon any notion of deterministic execution, allowing us to repurpose computing concepts, born and raised in an utterly centralized and micromanaged tyranny, and apply them to fallible, malleable, gullible people and other living machineries. We are the coders of our lives and of our shared reality, and we each have to turn profit enough to stay alive through the next squeeze, and we are machines long evolved to do that. And we all fail in the end, of course. This is all just a starting point; truth, for example, has hardly come up. Our machines are material systems in the material world, and much of their processing is a direct result of their material form. At the bottom it's like reflexes: direct connections from input to output, sense, act. Above that, there are systems for
handling trouble, with input filters to highlight risky stuff, custom outputs for emergency response, and sequence machinery always trying to predict. All that hardware and processing I call direct reality, and most of it is not programmable by the coder, at least not directly. It's not that it's fixed, like a reflex, but it programs itself, or at least it tunes itself, based only on the machine's own actual experiences. And then above that, grounding out in direct reality but separate from it, there are the virtual machine processes: the programming languages, interpreters, and the code and sequence storage mechanisms that make the machine so flexible and powerful. And built atop all of that, starting with code-like things but leaping vastly beyond them, there is consensus truth: a huge interlocking code base, a work in progress, yes, but already the absolute glory of humankind, about how to see, predict, judge, and do most effectively. Keats said truth is beauty, but Coder says: not until beauty's implementation is unpacked. So in the meantime we say: truth is reliable, practical code. Practical jokers go for quick laughs by violating the consensus truth API, tricking machines into seeing incorrectly and then doing unprofitable work, ha ha. Of course, the pranksters commit credibility suicide in the process; the API has some defenses against such things. Grifters do it too, and it's a real problem, but still, at any moment there is a vast flux of sincere consensus truth code moving through our communication systems large and small. Which does raise an issue. As a basic part of machine maintenance, Coder typically needs to assess how well their deployed population of machines is holding up. They'd like to have some kind of message that each code deployment could send, to signal that it's still running. But what message? If it's anything compatible with consensus truth, 2 plus 2 equals 4, then how is Coder to know that it's really from an active installation, and not just a random fact moving around? So there's a standard trick: Coder
selects some deliberate error and has the machine ship that. This chosen error can be anything that's unlikely to be mistaken for consensus truth, from non-words to strange phrases to funny hats to bald-faced lies. And counting traffic in any particular chosen error allows not just the Coder but the whole population of installations to assess their own numbers. One final thought: we'll write better code, and get more profitable, reliable work out of our deployed people-machines, if we understand their common failure modes. For starters, people-machines are powerful amplifiers, both from their own input to their own output, and also from their output to other inputs. And amplifiers are touchy: they can feed back, overload, burn out, damage nearby stuff. It's great when our machines are zooming along, working steadily, but we need to recognize when a machine is heading the wrong way, or is stuck looping without progress. Now, I'm old, and I'm doing all right now, but I remember: a machine getting hung up, a depressed person, can be in profound pain. And the hell of it is, machines that are hung up on something, stuck on a contradiction, are actually the most open to running new code. They're looking for it; their existing stuff isn't working. The consensus truth database is a magnificent achievement, but all too often it can be dismissive and arrogant. Religions have desperation APIs just for this purpose; cult leaders and grifters do too. I think the hypersubspace of consensus truth has plenty of room for empathetic APIs, to help stuck machines get moving. We should develop empathetic APIs that are still tightly coupled to direct reality and consensus truth, but offer place and purpose and positive meaning, even if it's all just physics. I don't know exactly how to implement that, but we are coders. I know we should do it. We should do it, in the words of the master, for every hung-up person in the whole wide universe. Thanks for watching.