The T2 Tile Project is building an indefinitely scalable computational stack. Follow our progress here on Tuesday Updates.

So, some progress. I have changed my little test beam so that it keeps going in whatever straight direction it's going until it gets to the edge of the universe and can't go any farther, and then it picks a new random direction. But now, in addition, it has a chance, one that gets bigger over time, of checking for an empty spot next door and putting another copy of its own type of beam there: it splits, out at the end of the universe.
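To make the rule concrete, here is a stand-alone toy sketch of roughly that behavior, in plain C++ rather than ULAM. It is my reading of the rule, not the actual T2 element: the names (Beam, SPLIT_K) and the exact split-probability formula are illustrative, and a real element would check for a genuinely empty neighboring site instead of just dropping the copy in place.

```cpp
#include <cstdio>
#include <cstdlib>
#include <ctime>
#include <vector>

const int W = 40, H = 10;     // a little bounded toy "universe"
const unsigned SPLIT_K = 20;  // smaller K makes the split chance grow faster

struct Beam { int x, y, dx, dy; unsigned age; };

static bool inBounds(int x, int y) { return x >= 0 && x < W && y >= 0 && y < H; }

// Pick a new random non-zero direction from the eight neighbors.
static void newDirection(Beam &b) {
  do { b.dx = rand() % 3 - 1; b.dy = rand() % 3 - 1; }
  while (b.dx == 0 && b.dy == 0);
}

int main() {
  srand((unsigned) time(0));
  std::vector<Beam> beams = { { W / 2, H / 2, 1, 0, 0 } };
  for (int step = 0; step < 500; ++step) {
    std::vector<Beam> born;
    for (Beam &b : beams) {
      ++b.age;
      if (inBounds(b.x + b.dx, b.y + b.dy)) {  // keep going straight...
        b.x += b.dx;
        b.y += b.dy;
        continue;
      }
      newDirection(b);  // ...until the edge of the universe, then turn,
      // and with a chance that grows over time, age/(SPLIT_K + age),
      // put another copy of our own type of beam next door.
      if ((unsigned) (rand() % (SPLIT_K + b.age)) >= SPLIT_K) {
        Beam child = b;
        child.age = 0;
        newDirection(child);
        born.push_back(child);
        b.age = 0;  // reset the parent's clock after a split
      }
    }
    beams.insert(beams.end(), born.begin(), born.end());
  }
  std::printf("after 500 steps: %zu beams\n", beams.size());
  return 0;
}
```

In the real event-driven grid, each atom would do this inside its own event window, asynchronously, rather than in a global loop like this.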
Let's take a look at it. All right, this is a test. I've got some of the lights on, and I've got one power zone behind me. Well, I'll plug it in, then turn off the lights, and we'll see what we can see. Let's see if it's visible. Oh, yeah, I think it is.

So, like I said in the story-so-far one-year summary: generalized hardware anxiety disorder, the fear that something is going to go wrong in the electronics. Well, this is something that's gone wrong. I have a theory, although I've looked at this before and I can't exactly figure it out. The BeagleBone, the processor I'm using, senses a few of its pins when you first power it up, and it uses the values on those pins to decide what mode it's supposed to be in; with the wrong values on those pins, it's possible to get it into a circumstance where it won't boot at all. I knew about that, and I have a special design that's supposed to handle it. But this particular behavior, where certain tiles won't boot when they're connected to a neighbor that's already booted, says to me that there are some leakage currents or stray voltages in there that I'm going to have to revisit. On the one hand, that should be completely, horribly distressing; on the other hand, it doesn't happen on all of the tiles, so if worst comes to worst I could just put them in a separate pile for a while. And I'm making enough progress, with the end of this phase of the T2 tile itself sort of in sight, that I'm getting a little cocky, like Anders said. I feel like we'll solve it somehow. I have some ideas about it. But first, step by step.

Living Computation Foundation: we had a couple more folks contribute. Thank you! We're into the 30s now. I haven't sent the official thank-you notes; I haven't run the script again. I'm sorry about that; I'll get to it in a couple of days.

The conference is coming up next week. That's really soon. The tutorial on Splat and ULAM is Monday, and we're getting to the point where we'll be able to do demos in Splat and ULAM. We're not yet at the point where we can run on the grid; I mean, right now it's just one power zone, but there could be eight more of those going around it.

Also, last time I showed the trace logger that weaves together trace files from multiple tiles and tries to align them on time, based on sync tags that are put into the trace files, so that I could see what was going on (there's a little sketch of the weaving idea below). And in the last two weeks, I did a lot of trying to see what was going on. At first I would run the tiles until somebody hit a breakpoint and then investigate, but that turns out to be rather difficult to make work, because all of the other tiles, the ones that don't hit a breakpoint, have timeouts, like we saw in the demo. If you don't respond within a certain amount of time, they figure you've gone nuts somehow, and they move on.

A lot of times they just fail these days, because, again, we're doing our best effort to catch as much of this stuff as we can. There will be failures even when the thing is all done and is as good as it's going to be; there will be failures that the software level is going to have to deal with. That's the essence of best-effort computing. But we'd like to catch as much as we can, best effort, before we acknowledge that there will be some things we can't get. So the timeouts happen, the failures will happen, and those will get fixed.

So really, it turned out to be more useful not to hit a breakpoint at all, but just to collect as much trace data as I could, more and more, and look at it afterwards. And I found that what I was spending most of my time doing was going through the woven results, seeing what happened, and trying to reconstruct it in my head: okay, we've got active event 23 on tile 0, we've got active event 17 on tile 1, this one's in this state, that one's in that state, and so forth. Finally I said: I've got to make another tool. I found a bug, and that eliminated a whole bunch of stuff, and then the next level of bugs was significantly more complicated, and they also took significantly longer to happen. So I ended up doing two things.

First, I created an interactive, curses-based version of the weaver. The window down here shows the woven trace files from, in this case, /0 and /1, the two tiles involved. Up here we have an event window map, where the program integrates over the events through time to say: well, we've got a 21 on tile 0, we've got a 19 on tile 1, it's in this state. It looks at each trace line as we scroll through and updates the map. It's been a fantastic help, though it has limitations, because it's computing that integral from the beginning of the trace.

Second, there's so much trace data that I was blowing out the disk: there was not enough space on the tiles to store these gigantic things. I had a failure that happened only after almost a hundred million trace events. So, in addition to making the weaver interactive, I also made the tracer accept rolling buffers. You can now say: give me eight megabytes of trace, and once it goes over that, it starts deleting the older files automatically, so we can put a bound on it. That means the weaver won't necessarily be able to integrate everything properly, because it's possible that the beginning is more than eight megabytes old. That's pretty rare, but it's a general problem that remains. This helped me a tremendous amount in finding the next bug.
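Here is a minimal sketch of that rolling-buffer idea: chunked trace files, with the oldest deleted once a byte budget is exceeded. The shape of it (TraceRoller, the chunk naming, the one-megabyte chunk size) is my own invention for illustration, not the real T2 interface, and it assumes a POSIX filesystem.

```cpp
#include <cstddef>
#include <cstdio>
#include <deque>
#include <filesystem>
#include <string>

namespace fs = std::filesystem;

class TraceRoller {
  fs::path dir;
  size_t budget, chunkMax, curSize = 0;
  unsigned seq = 0;
  std::deque<fs::path> chunks;  // oldest chunk at the front
  std::FILE *cur = nullptr;

  void openChunk() {
    fs::path p = dir / ("trace." + std::to_string(seq++));
    cur = std::fopen(p.c_str(), "w");
    chunks.push_back(p);
    curSize = 0;
  }

 public:
  TraceRoller(const fs::path &d, size_t budgetBytes, size_t chunkBytes)
      : dir(d), budget(budgetBytes), chunkMax(chunkBytes) {
    fs::create_directories(dir);
    openChunk();
  }

  void log(const std::string &line) {
    std::fprintf(cur, "%s\n", line.c_str());
    curSize += line.size() + 1;
    if (curSize >= chunkMax) {  // this chunk is full: start a new one...
      std::fclose(cur);
      openChunk();
      // ...and delete the oldest chunks until we are back under budget.
      while (chunks.size() * chunkMax > budget && chunks.size() > 1) {
        fs::remove(chunks.front());
        chunks.pop_front();
      }
    }
  }

  ~TraceRoller() { if (cur) std::fclose(cur); }
};

int main() {
  // "Give me eight megabytes of trace," in one-megabyte chunks.
  TraceRoller tr("/tmp/t2trace", 8u << 20, 1u << 20);
  for (int i = 0; i < 100000; ++i)
    tr.log("trace event " + std::to_string(i));
  return 0;
}
```

With one-megabyte chunks, "give me eight megabytes of trace" just means keeping the newest eight chunk files; everything older is gone, which is exactly why the weaver can lose the beginning of the story.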
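And going back to the weaving itself, as promised above, here is a toy sketch of the sync-tag idea, under my own assumptions about the trace format; the Event struct, the tag text, and the single shared tag are made up, and the real weaver certainly differs in the details. Each tile logs events against its own local clock, a shared sync tag shows up in both traces at nearly the same real moment, and that lets us shift every clock onto tile 0's and merge.

```cpp
#include <algorithm>
#include <cstdio>
#include <string>
#include <vector>

struct Event { long t; int tile; std::string text; };

int main() {
  // Two fake per-tile traces whose local clocks are skewed; "SYNC #7"
  // is a sync tag recorded by both tiles at (nearly) the same moment.
  std::vector<std::vector<Event>> traces = {
    { { 100, 0, "AE 21 begin" }, { 150, 0, "SYNC #7" }, { 200, 0, "AE 21 end" } },
    { { 900, 1, "AE 19 begin" }, { 950, 1, "SYNC #7" }, { 990, 1, "AE 19 end" } },
  };

  // Find each tile's local time for the shared tag, so we can map
  // every tile's clock onto tile 0's clock.
  std::vector<long> syncAt(traces.size(), 0);
  for (const auto &tr : traces)
    for (const auto &e : tr)
      if (e.text == "SYNC #7") syncAt[e.tile] = e.t;

  // Shift every event into tile 0 time and merge into one timeline.
  std::vector<Event> woven;
  for (const auto &tr : traces)
    for (Event e : tr) {
      e.t += syncAt[0] - syncAt[e.tile];
      woven.push_back(e);
    }
  std::sort(woven.begin(), woven.end(),
            [](const Event &a, const Event &b) { return a.t < b.t; });

  for (const auto &e : woven)
    std::printf("%6ld  /%d  %s\n", e.t, e.tile, e.text.c_str());
  return 0;
}
```

The interactive version then integrates state over whatever woven timeline survives in the rolling buffer above.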
We still have more, as we saw; we have more here. Oh, yeah, we also have kernel panics, which I have yet to diagnose completely. The ITC packet shipper process that you may see up here is one of the kernel threads I wrote to handle the inter-tile packet traffic. I've seen it mentioned a couple of times, but for some reason I'm not getting very good logs: I reboot after the crash and there's a nice hole in the log file right where more information would like to be. So this has not been localized yet. It's easy for me to believe it has something to do with a buffer running out that I didn't handle properly, because it never happened before and now it's happening, but that remains to be seen.

OK, and that's it. The next update will be in two weeks, after the artificial life conference next week. I want to make more demo spaces; I mean, we're getting to the point where we could write little bits of ULAM and Splat code and run them on little chunks of grid. It would be fun. I'll see you next time.