Computers keep changing the world, but their power and safety are limited by their rigid designs. The T2TILE project works toward bigger and safer computing using living systems principles. Follow our progress here in the T Tuesday updates.

Okay. Hey folks. If you're not watching the live stream, which is quite likely, know that I was already messing up, second time out, trying to go live. So this is already kind of confused, and we'll see how it all comes out.

All right. So that was a combined demo bumper for this week's update: The Useful Valley. I'll talk about what I mean by that a little bit later.

So what's been happening since the last update? The main thing that has actually taken up the time has been doing science stuff. I attended two online meetings. The earlier one was the Santa Fe Institute workshop on the frontiers of evolutionary computation, which covers genetic algorithms and the kind of stuff that I used to do more than I do now. It was nice because there were folks there that I hadn't seen for a long time and got to chat with a little bit. My official job at this meeting was to be a discussant. There were three speakers, and after each person spoke there were questions and discussion already, but then there was a whole other session at the end of the three of them, at the end of the first morning, where I was supposed to lead the discussion. So I did my best with that. Here's a clip from the Zoom recording.

The answer is embodiment. That's what I was suggesting. And some people sort of misunderstood, thinking that by embodiment I must mean robots, which in retrospect is a reasonable thing to imagine. But that really isn't what I meant. What I meant by embodiment was being about systems. Taking a systems perspective means you need to consider where your computation is actually occurring in space, how long it takes, what energy it uses, and all that stuff.
Even if it's inside a Raspberry Pi or your watch or your cardiac pacemaker or whatever it happens to be, that doesn't seem like a robot, but it's embodied. So really it was about taking the systems perspective. And we had a pretty good discussion about it.

Now, the quality of that clip was really not great, and that's my current worry about this whole live streaming business: quality issues. Partly it's my hardware, partly it's the bandwidth and everything. We'll see. I'm not sure whether I'm going to commit to doing most of these T Tuesday updates live going forward. It really does reduce the total amount of post-production to basically zero.

Okay, but that was the first meeting. The second meeting, just last week, Thursday and Friday, was the eighth Workshop on Biological Distributed Algorithms, also online. There I was giving an actual talk, and that's why I screwed up the beginning of this live stream: I was trying to get the audio stuff set up and get all the things that you actually want out of it. It's a little bit complicated and I didn't really get it right. But let me show you a few clips of how my talk began at the Biological Distributed Algorithms workshop. Let's see if this works.

"Dave, you're muted." I was muted. And did you see that right there? I messed up the OBS layout. There we go. Okay. The process of getting unmuted. It was just totally stupid. I was muted. That's all. Not anymore.

So I managed to get going. But then, looking at the next slide, I realized that I had messed up the layout. Oh, dear. "I can fix this. I can fix this." And the point of this is that I was trying to redo the OBS layout on the fly while presenting. And notice what's going on there. What the heck is that? But I eventually got there. See, I didn't get it quite right. It was going to have to be close enough. All right. So the thought is, oh, what a disaster. After that, the talk didn't go too badly.
But you know, that smear thing? That's the hack that I was using to do MaxScreen. The way I do MaxScreen is I take the two screens, the black-background content and the me, and I put them side by side on one big virtual screen. Then I apply a filter which takes a max between two different places on that same virtual scene. That let me get around the problem that I couldn't figure out how to get multiple sources working in OBS. Anyway, what a disaster. Again, the talk was all right. It wasn't my best outing, but it wasn't the worst either. So that was the Biological Distributed Algorithms workshop, and there were some good questions and so forth. I think it's been put up publicly, so maybe I'll cut out the less embarrassing portion of what I did and put it up on the Dave Ackley channel or something.

The other thing I wanted to talk about for a few minutes here, and I'm going to try to keep this whole thing short, is that I did some more work on Psi, the stochastic iteration model, which I showed last week as a use of the diffuse plates. Again, it's a neural network model that has a bunch of voters and a bunch of elected positions. The voters say who they want to elect, then they react to who actually got elected, a function produces a score, and so on. Last time, for the function over here, I just cheated: I made a single atom that took a bit vector representing the entire input from Psi and just came up with a number. I wanted to try to do that better, and that's what I have this time.

So this was the egg: when you gave an event to the seed on the original Psi from the last update, it popped out into these nine atoms. Actually, it has since gotten a few more atoms; the image I'd shown was old. Here's the new one. It's, I think, 15 atoms. So here, these are two weight matrices for Psi itself. PQ, how can I do this? I really have got to get more practice. PQ is the poll questions, as before.
This VV is the voter vectors that go off in this direction. But now we've got this whole new bunch of stuff: another weight matrix, and FT, the function terms. So now we have a distributed function as well.

And this last little guy, this MI, that's a migratory seed. One of the things I wanted on top of all of this, in addition to having it do the function optimization and all that stuff, was that it was supposed to actually still be movable, a Movable Feast Machine. But to do that using the existing code, you have to have a plate operator way up at (1,1) on the plate, whereas in the Psi plate all the action is in the middle. The Psi operator is right in the middle of the plate (well, it used to be in the bottom center; now it's kind of in the middle) so that it can reach all the different parties, all the different stakeholders in the optimization process. So that MI, the migratory seed, knows it's supposed to work its way to (1,1). And when it gets there, it decays into, well, anything that we want. In this case, it turns into a plate operator, which then controls what's going on. So that was kind of fun.

And again, at the Biological Distributed Algorithms workshop, Mike Levin, who does wild computational biology stuff (I've talked about him before), gave a newer version of his familiar talk about development: how you can mess with the cells in, say, frog development, the tadpole and all that, and have it still grow up into something that kind of works, even if it's nothing that evolution ever would have seen under natural circumstances. And part of that was that you can put stuff anywhere and it'll migrate to where it belongs. So the migratory seed working its way up there to become a plate operator felt kind of like that.

All right, so let's take a quick look at the current version. So now it's got a function value getting displayed automatically. And all of this stuff over here, this is the function.
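Backing up to that migratory seed for a second: the walk-then-decay idea can be sketched roughly as below. This is my own illustrative Python, not actual ULAM code; the names (`step_toward`, `migrate`, `"PlateOperator"`), the bias parameter, and the event budget are all assumptions made up for illustration.

```python
import random

def step_toward(pos, target, rng, bias=0.75):
    """Move one cell: with probability `bias` step toward the target,
    otherwise step randomly (events are asynchronous and noisy)."""
    x, y = pos
    tx, ty = target
    if rng.random() < bias:
        dx = (tx > x) - (tx < x)   # sign of the offset on each axis
        dy = (ty > y) - (ty < y)
    else:
        dx, dy = rng.choice([(-1, 0), (1, 0), (0, -1), (0, 1)])
    return (max(1, x + dx), max(1, y + dy))  # stay on the plate

def migrate(start, target=(1, 1), rng=None, max_events=10_000):
    """Random-walk a seed toward the target site; on arrival it
    'decays' into whatever it is programmed to become."""
    rng = rng or random.Random(0)
    pos = start
    for _ in range(max_events):
        if pos == target:
            return "PlateOperator"   # the seed decays on arrival
        pos = step_toward(pos, target, rng)
    return "Seed"                    # never arrived; still a seed

print(migrate((20, 20)))
```

The point of the sketch is just the Levin-style property: you can drop the seed anywhere on the plate and, purely through biased local moves, it ends up where it belongs before becoming the plate operator.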
This is Psi, just like last time, but this over here is a whole bunch of individual terms. Each one might be satisfied or unsatisfied, depending on whether it turns on or off. So this is a much better example of how Psi actually works. Now look at that: it's flipping back and forth in what function values it's testing. That's characteristic of Psi. (I really need to get the audio ducking working here, because our theme song is pretty loud. But this will just go for a second.)

So, the things to notice there. Number one, the function value does not just go relentlessly up. It goes all over the place, up and down. Now, partly that's because of the way the function was designed: even if you give it the exact same input, you might not get the exact same output. Each of the terms contributes a certain reward or penalty if it gets engaged, and there's some chance a term may turn on or off even if your inputs were pushing it one way rather than the other.

But in the bigger picture, that's also just the nature of Psi. As we saw last time, it gets excited about seeing something and then it kind of freezes. There's a period where you don't see the inputs to the evaluation function changing very much, because Psi is all excited: hey, I'm getting 27, 27, 27, that's great. But then it gets bored with it, and when it does get bored with something, it's biased by its design to consider the opposite. So there were a bunch of cases where the top was all on and the bottom was all off, and then it switched to the other way, and kind of switched back and forth a little bit. That's the way Psi works, and as far as I'm concerned, it's a feature rather than a bug. I mean, it kind of corresponds to electing all Republicans and then electing all Democrats and then electing all Republicans again, or something like that. But it's because the Psi decision units use plus and minus one.
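To make that concrete, here's a toy sketch of the two behaviors just described: terms that stochastically mis-fire even when the inputs push them one way, and ±1 decision units that are biased to flip to the opposite sign when punished. This is my own illustrative Python, not the actual Psi or ULAM code; the `Voter` class, the per-voter weights, the 0.8 flip probability, and the `noisy_score` rule are all made-up assumptions.

```python
import random

class Voter:
    """A toy ±1 decision unit: holds a preference, and when punished
    is biased toward flipping to the opposite sign (a 'flighty voter')."""
    def __init__(self, rng):
        self.rng = rng
        self.pref = rng.choice([-1, 1])

    def vote(self):
        return self.pref

    def feedback(self, reward):
        # Punishment biases the unit toward considering the opposite.
        if reward < 0 and self.rng.random() < 0.8:
            self.pref = -self.pref

def noisy_score(votes, weights, rng, noise=0.1):
    """Each term contributes a reward if engaged, a penalty if not,
    and may stochastically flip even when the inputs push it one way."""
    total = 0.0
    for v, w in zip(votes, weights):
        engaged = v * w > 0          # inputs push this term toward 'on'
        if rng.random() < noise:     # ...but it can still misfire
            engaged = not engaged
        total += abs(w) if engaged else -abs(w)
    return total

rng = random.Random(42)
weights = [1.0, -2.0, 0.5, 3.0, -1.5]   # one term per voter, for simplicity
voters = [Voter(rng) for _ in weights]

best = float("-inf")
for step in range(200):
    votes = [v.vote() for v in voters]
    score = noisy_score(votes, weights, rng)
    best = max(best, score)          # record the best score ever found
    for v in voters:
        v.feedback(score)

print(best)
```

The score wanders up and down rather than climbing monotonically, which is the point: the same votes can produce different scores, and punished voters defect to the other side. In the real model all of this is distributed across atoms on the grid instead of living in one loop.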
So if they've learned something, like "by being plus, I get this reward," and they start getting punished, they're actually biased toward flipping to minus and going over to the other side. So they're these very flighty voters. I need to put in an extra little thing there to record the best score it's ever found, because obviously it jumps around a lot.

But the other thing that was new was that it was showing the score itself. Last update, I had an atom viewer from the simulator there, whereas now we can run this thing incredibly slowly on the grid and see the function values being displayed right there. And that's what our opening demo was as well: actual text being displayed as atoms on the grid. That came from, can I do this? That came from, yeah, let's try this. This is old, old code that I refreshed: the Hello World program. This was first shown at a talk I did in San Diego. Actually, I think that's where I met Andrew, a long time ago, seven years ago, something like that. And you can change fonts. Maybe you can change fonts. Can we change fonts? Oh, there we go. Okay. So I refreshed this code for the plate technology, and that's what was used for the opening demo and what is used now for the live function display in the Psi demo.

Okay. So that's what's happened on science and engineering, such as it is. Now, the useful valley. There's this notion, it's kind of disputed, called the uncanny valley: when you have robots or computer graphics of people, the idea is that when they look like cartoons or stuffed animals or something, they can be super cute. And if they could look absolutely indistinguishable, literally like people, then that would be okay too. But when they start getting close to being like people, without actually being people, it gets creepy; the reaction gets really negative. And, you know, there's some argument about whether that's really true or whether people just got over it ten years ago.
I mean, certainly it's been the case that I've seen some movies with CGI stuff that looked kind of creepy to me. Even one of the Toy Story humans looked pretty weird to me when I first saw them, but you get used to it.

But here's the analogy I see. Okay, so the T2 Tile project is doing a whole new way to compute, at least that's the claim. And we've done cell membranes, we've done cloud computing, mobile mobs, all kinds of wacky stuff that grows and reproduces and does kind of living-y stuff one way or another. And that's fun, and I've got this background in artificial life which plays right into that. But the goal, as I say over and over, is the path to useful, the path to utility: I want to take this technology far enough that people will say, maybe there's enough potential there that we should invest in more serious research and development. We should get people who actually understand hardware to do a better version of the hardware, get people who really understand network communications to do all of that, and re-walk these steps. But keeping indefinite scalability, and the idea of doing local communications and being very limited in what we let ourselves see, so that we're forced to redo a lot of work. That seems redundant, seems wasteful, but in fact that's what makes robustness. That's what living systems do: they're continually rebuilding themselves so that they can repair whatever has so far gone wrong.

So there's this weird thing happening, or I sort of sense it's starting to happen. When it was cell membranes and worms running all over the place, well, that was cool, because it was like life-y stuff. But now these plates, they're all square. Now we've got actual numbers and text appearing. And that was some non-trivial engineering to get all of this stuff working.
Specifically, the plates and the text stuff and so forth of just the last update. But on the other hand, it kind of is boring, because now, instead of seeing it with a wacky evolutionary artificial-life eye, we see it as: when's it going to run spreadsheets for us? And then it starts to look like a really, really bad spreadsheet: one cell, running incredibly slowly. And I'm not a hundred percent sure what to do about that. On the one hand, I absolutely want it to be on the path to utility. On the other hand, I don't actually want it to be an Excel spreadsheet. That came up in one of the questions at the BDA, the sort of number one killer-app question, and the previous answer we've given to this is: take the grid and use it to control a simulated robot, a simulated system. And I'm still convinced that that's probably the best thing to do to break people out of thinking that computers have to be their servants, where they type and the computer just sits there and waits. No, the computer is out there doing whatever it does in real time. But it makes me a little more cautious about what to develop next. So we'll see how that goes going forward.

And it leads to my plans for the next update. I think it's time to get back to the grid. It's time to get back to the hardware tile. It's time to get all of that event loop stuff back into my brain and try to figure out how we're going to get around whatever seems to be going on that's causing the T2 tiles to glitch up: to hit unwritten error cases that don't have handlers written for them. Because it sort of seems like, given the way the code was written so far, the only way to handle what we were seeing there was going to be user-visible. It was going to cause events to not do what the ULAM programmer, and the ULAM infrastructure, the semantics of the ULAM programming language, said it was going to do, with its best effort.
And yes, sooner or later there's going to be best effort, and best effort is going to fail, but we want to delay that as long as we can. So that's number one. Number two: get ULAM 5, the next version, out. It's two years late. Get serious about getting that back and ready to release, so there'll be one, two packages and we can just start moving on that. So that is my goal for the next update: get my brain going on the event loop, which was painful before, but I feel like, having let it lie fallow, maybe I'm ready to dig back into it and say, okay, what the heck, let's tear it up and put retries and timeouts in, or whatever it ends up being, and also collect the language stuff, which is mostly there.

So that's it. I'm sorry for the folks who tuned in at the start and got the messed-up beginning. I need to learn how to use this stuff better. That's the other message from all of this. And so I will see everybody, I hope, in two weeks, and we'll see how it goes. I messed up two weeks ago in that I ended the stream while there were people actually hanging around chatting, people I wanted to chat with, but I clicked away to another tab, not realizing that that was actually going to shut the whole thing down. So I don't know if anybody's actually listening in today, and it doesn't matter, but I'm going to be careful to let the stream hang around, and I'll be happy to chat if anybody has anything to chat about. Okay, that's it. That's it. We'll see you all next time.