Computers keep changing the world, but their power and safety are limited by their rigid designs. The T2TILE project works toward bigger and safer computing using living-systems principles. Follow our progress here on T Tuesday Updates. This is the 45th T Tuesday Update. Let's get into it.

Last time, just back from the ALIFE conference in the UK, I talked about people instead of the official nerd stuff. That was a strange thing to do, but it actually did pretty well; I'll talk about that briefly. Since then, I've been back to working on the T2 tiles actually running events. Well, they've been running events, but now it's events that have effects between one tile and another. That involves two steps. Step one is locking: one tile requests that the nearby tiles say, please don't do anything to this area of the grid, because I'm having an event there and I'm relying on the current contents of what you have there to decide what to do next; I'll tell you what changes I made, and then I'll release the locks and you can go ahead. We'd been working on that for a few weeks before we went to the UK. Step two is the cache updates, the part in the middle there, when we're holding the locks and saying: okay, here's what I did. I changed this, I changed this, I changed this.

There's also an adaptive robustness feature in the cache update protocol that I haven't talked about before. Every so often, the protocol will randomly send along one of the sites of the event window — the 41 sites that may be affected — even one that wasn't changed, even though the receiver shouldn't need it. The receiver should already have that information, but the packet is flagged so the receiver checks it against its own cache, and it ends up becoming a consistency check.
And if it turns out there was an error — the receiver was told that site 35, or wherever it is, should hold a certain thing, and it turns out they had something else in there — they'll send back a request saying we have an inconsistency. The sender will automatically note that and start sending complete event windows adaptively until everything starts checking out again, and then it'll automatically drain back toward efficiency. That's one of the benefits of the cache update protocol that we get from being willing to have some round-trip protocols, rather than a fire-and-forget approach to telling the neighbors what has happened. So we're getting closer to having cache updates move from one tile to the next. We're not quite there; I'll tell you where we're at.

So, okay: last week's video is number one of the last 10 videos. That's the way the new YouTube analytics wants to show it to us, by number of views. Now, of course, in T2 land that means like 180 views — and more typically, at about a week out, I'm lucky to have 150 views. So we're 20% over what used to be a normal high, or what was a normal high over the last 10 episodes or so. So, new folks, if you're here — or, put it this way, if you're back — welcome. I do have serious questions about the new analytics, though. When you go to the new analytics for YouTube creators, it's right on your dashboard; you can't avoid it. At least if you release videos regularly, it tells you your ranking by views: number one out of the last 10 videos, 180 views, which is up 43% compared to something. Although the average view duration is only seven and change, and the total watch time is also down. This always happens whenever I get a bunch of views, relatively speaking.
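The adaptive part of that check could be sketched like this — a minimal illustration with invented names (`CacheSender`, `shouldSpotCheck`, etc.), not the actual T2 code: the sender occasionally ships a site the receiver shouldn't need, and a reported mismatch pushes it into full-window sends that then decay back toward the efficient mode.

```cpp
#include <cstdlib>

// Sketch of the adaptive-redundancy idea (names invented for illustration).
struct CacheSender {
  static const int FULL_WINDOW = 41;  // sites in an event window
  int redundancy = 1;                 // 1 = rare spot checks; 41 = send everything

  // Ship an unchanged site along with the real updates, with probability
  // proportional to the current redundancy level.
  bool shouldSpotCheck() const {
    return (std::rand() % FULL_WINDOW) < redundancy;
  }

  // Receiver reported a spot-checked site that didn't match its cache:
  // escalate to sending complete event windows.
  void onInconsistency() { redundancy = FULL_WINDOW; }

  // A whole event window checked out clean: drain back toward efficiency.
  void onCleanWindow() { if (redundancy > 1) --redundancy; }
};
```

The point of the shape is the asymmetry: one detected inconsistency jumps straight to maximum caution, while recovery back to the cheap mode happens one clean window at a time.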
There's usually a bunch of people who watch a little bit and then drop off. Now, there are always people who watch the first 10 seconds and go, whoops — either, number one, this is not what I wanted, or, number two, I didn't even want to be here at all, I just sort of fell on the keyboard and the thing started playing — so they bail out. So there's a huge drop-off at the beginning. But there's also this trade-off: if social networking actually manages to work, if the click-through rate gets up to 7% to 10%, whatever it is — which is great for a T Tuesday Update video — that usually means it's bringing in some folks who aren't actually that interested. So in a way, I feel okay about this. I feel like it's supposed to be that way. So I should probably talk about people more, but then I have to do stuff. Well, we'll see what happens coming up.

Anyway, also in the propaganda division: thanks to Andrew Walpole. He and I got together and said, why don't we just do this? There are all of these scattered things — there's robust.cs, there's the Gitter, there's the GitHub stuff that's all over the place — and everybody's been saying we really need a place to pull it all together. So we said, let's just do it. I went out and bought t2tile.org and t2tile.com — they point at the same place — and Andrew got the whole thing set up over this past week. It's really nice. He made this great logo; I really like this logo, and the rectangular aspect ratio even matches the rectangular aspect ratio of the grids that are going to run on these tiles. At the moment it's really minimal: there are links to the T Tuesday Update YouTube channel, where we are here; links to the GitHub for the super nerds; links to the chat room on Gitter. We need more text.
I want a vision section to go in there. You know: imagine a world where "computer" is a mass noun like "water," instead of a count noun like "fingers." Imagine a world where computers are constantly healing, repairing themselves. Imagine a world where your computer gets to know you — not because of passwords, but because of shared history, because of experience, because of working together to accomplish things. That is the world for which the T2 Tile Project is doing the lowest-level research: to build a computational stack that could actually do that in a robust way, rather than a Hollywood false-front set — a world 100 miles wide and one pixel deep — which is what we have with traditional computing's focus on efficiency.

So we have a place to pull it all together. It's wired up to GitHub so that just committing to master releases the site live. It's great. Andrew also set up a dev branch with a different URL — I'll put a link to it somewhere — that has more stuff that's not ready for master yet. And it's got all this automation: I made a commit, all of these checks fired off, and after everything was happy, bam, it was on master. It's really great. Okay, so that's the propaganda division. We need more content there, and of course we need a lot of written content — a lot of written content from me — but I'm really conflicted between trying to write code and trying to write English. I want to make intertile events happen. So mostly this week, despite my best intentions, I didn't write much text. I wrote code.

All right. That was the propaganda division; oh yes, now the 3D division. I haven't printed anything — well, hardly anything. I printed some little feet to stand up the little grid that I work with here. But the 3D printer was turned off for a couple of weeks solid while we were traveling.
And I get back and I find this: some kind of goo — corrosion, glue loosening up, something getting under the film, the PEI, on the print bed. There's another angle on it; it's all around the thing. The machine was turned off. What the heck? I don't know what to say about it. I only discovered it yesterday when I went to print those feet. The print area in the middle seems to print fine, but I'm presuming all this brown crud is globbed-up glue that's no longer doing its job of being a perfectly smooth, monomolecular layer of glue — not that it ever really was. One thing after another. Of course, my own thinking here is: I have a new 3D printer still in the box. But it's a build-it-yourself, and I'm still afraid of it. I haven't broken the box open yet. So maybe this is a little hint that I'm going to have to budget some time for that. We'll wait until the next actual 3D printing task, which will probably be in a week or two, when we get back to the feet and to how we're going to attach a grid of four-by-four tiles to something that'll make it more rigid and hold it together, so that we can slap around groups of 16 tiles as a unit. That's coming up. So that's the 3D printing issue.

The software division — that's the main thing I want to talk about, and of course I've already blown half the time, so we'll see how long we've got here. Locks are actually really solid now, and that's great. I had a spike working that was exercising them, and now MFMT2 is actually asking whether intertile locks are necessary. As far as the simulated version of the tile is concerned, no locks are ever necessary, because only one tile is being simulated, and all the locks it would think were necessary would actually be simulated inside it as well. So there's only one.
So as far as it knows, it never needs the locks — but now it's actually calling out to T2 code, which is actually grabbing locks between tiles at the rate that events are happening, and so forth. That's all working great. So the question is, can we move on to actual cache updates, to actually get packets moving?

This picture — sorry for the glare — is a hack I'm using because I want the Keymaster, the guy with the white case, connected in all six directions, so that I don't have to worry about having an event on an unconnected edge and dealing with the edge cases — the literal edge cases — too soon. But I also want to make sure I can get an Ethernet cable in, which has to go in through the west edge, because that's the only place to access it. So this is my solution: I've chained two ribbon cables together, so the west side of the Keymaster goes all the way over and joins up to the east side of this guy. Why this guy? Because he's the nearest guy that isn't already touching. I could plug into this guy up here, for example — he's got an east side that's closer — but that guy is already talking; he's already the northeast of the Keymaster. So the goal is to have six different tiles connected, and this actually has it. In fact, this is an example from when I was pushing out a new version of MFM using the Common Data Manager, and it's actually showing all six directions — northwest, northeast, east, southeast, southwest, west — pushing in six directions at once, and you can see it. There it is. The big blue arrows mean lots outbound; the smaller yellow arrows mean less inbound, in each direction. So "2000{10" means it sent 2000 bytes in 10 packets. And the reason it's curly is because this is reporting statistics on the bulk rate — the curly-out and the curly-in. The right curly is output, the left curly is input, of the bulk-rate traffic, the stuff the Common Data Manager uses.
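To make that on-screen notation concrete, here's a toy formatter that pairs a byte count with a packet count around a class/direction delimiter. The function name and the exact delimiter conventions are my own invention for illustration, not the actual MFM display code.

```cpp
#include <sstream>
#include <string>

// Render a traffic statistic in the "2000{10" style described above:
// byte count, then a delimiter encoding traffic class and direction
// (curly brackets for bulk/CDM traffic, angle brackets for MFM priority
// traffic), then the packet count.
std::string formatStat(unsigned bytes, unsigned packets, char delim) {
  std::ostringstream oss;
  oss << bytes << delim << packets;
  return oss.str();
}
```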
And it's going in all six directions at once. It's good to remind myself every so often of the progress we have made, because I'm always focused on the next bug — the next bug, it's not working, it's not working — like that. So this stuff is actually feeling pretty solid at the moment.

All right. But how do we get cache packets done? We have to say: okay, we've grabbed the locks from the neighbors; now we've done a thing; we've made some changes; we've made a DReg, who knows, we did something — and we need to tell the neighbor what has happened. All right, this is some stuff from the notes file. "So it's time to go to England" — that was two-weeks-plus ago — and then we're back. Boom. That's how files work. So, we've now got all of the hardware locks being handled by the Linux kernel module, which now has an interface to the software. The way we were doing it was with this fake event thing, which was just a temporary function I put in that was trying to grab locks in reasonable ways. We've now gone beyond that, and are actually getting the actually-needed locks for the events that are actually happening, in principle.

So then the question is: how do we do this? This is something I've been going back and forth, around and around, on for weeks. The MFM simulator has this whole complex structure: there's a grid that contains tiles; each tile contains an event window plus six cache processors that connect to other tiles; and each of those cache processors has channels, and so forth. And where am I supposed to come in? I've been finding it incredibly difficult. This is part of why it's taken so long to get cache packets actually traveling: every time I try to say, okay, let's do it the MFMS way, I get into it and I find some mismatch.
Like, for example, the cache processors expect to be dealing point-to-point with the neighboring tile, whereas in fact all the packets go through a single Linux kernel module, and it's better to deal with that as one lumped abstraction rather than as six individual connections, and so forth. So I keep screwing myself up and instead just building spikes — little test code — using what I have and climbing up into slightly more complexity. That's what fake event does: it's just a top put on the thing to make it go.

So I worked on it over the last week, and fake events started becoming slightly less fake, and eventually I figured out what I could do: I'll make a sort of delegate that represents the hardware — the T2 tile hardware. The delegate will be a class with methods in it, and wherever the existing code seems to need to do something different for the T2, instead of doing it directly, it'll call the delegate and let the delegate do it. Then MFMS can have a delegate that just calls back and does the original stuff, whereas the T2 delegate does something new. And that's what we actually managed to do. It took forever, but we got to the point where we're actually going to take a cache update packet and send it to a neighboring tile.

Now it turns out — because of that whole hierarchy of stuff that already exists in MFMS — that each packet ends up being this tiny little thing. That was a real question, because way back when, when I was first getting bytes moving through the PRUs — the intertile communications — I was doing loopback tests to see what kind of speed I could get, and the speed in terms of bytes per second was helped by having big packets. Little packets were really slow. But the way things are set up, they're shipping little packets — like one atom each. These things are like six bytes, or ten bytes, to get started.
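A bare-bones sketch of that delegate arrangement — names invented for illustration; the real interface in the codebase will differ. The shared engine calls through an abstract delegate, the simulator's delegate keeps everything in-process, and the T2 delegate is where the kernel-module traffic would go.

```cpp
// Minimal stand-in for a finished outbound packet.
struct PacketBuffer {
  const unsigned char * data;
  unsigned length;
};

// Abstract delegate: the engine calls this wherever hardware behavior
// might differ from the simulator.
struct ITCDelegate {
  virtual ~ITCDelegate() { }
  virtual bool shipBufferAsPacket(const PacketBuffer & pb) = 0;
};

// Simulator delegate: just does the original in-process thing.
struct MFMSDelegate : public ITCDelegate {
  virtual bool shipBufferAsPacket(const PacketBuffer & pb) {
    // Hand the buffer straight to the simulated neighbor tile.
    return pb.length > 0;
  }
};

// Hardware delegate: talks to the real intertile machinery instead.
struct T2Delegate : public ITCDelegate {
  virtual bool shipBufferAsPacket(const PacketBuffer & pb) {
    // A write() to the ITC kernel module's device would go here.
    return pb.length > 0;
  }
};
```

The engine only ever holds an `ITCDelegate &`, so the same cache-update code runs unchanged in the simulator and on the tiles.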
To get good bandwidth — good total information moving between the tiles — it was better to have 100-to-150-byte packets. But I left it as it is, because the goal is just to get it working. And in addition, I've started to think — I keep saying this — it's not about bandwidth, it's about latency: how quickly can you get that first packet from one tile to the next? With these tiny little packets, everything is pipelined, so it could actually be the case that the neighboring tile is working on the earlier parts of a cache update while the sending tile is still sending the later parts. If we batched them up into one big packet containing many little updates, we'd get higher bandwidth, but the remote tile would have to wait until the entire packet arrived.

So here's an example: we have this "ship buffer as packet" step. This is when we've gotten the next little atom, or whatever it is, all the way forwarded down into a packet buffer, and the goal is just to move it from one tile to the next. We've now replaced the direct call — ship buffer as packet, MFMS-style, the original version — with a call to a method on our ITC delegate, and we let it deal with it.

One of the things I ran into was the flash traffic — the stuff that lets you say grid reset, grid shutdown. That goes as flash traffic packets, which were defined by these driver ops a couple of weeks ago. Once I went and looked to see what was going on, they were kind of inconsistent; the numbering was wrong. According to the document, the special MFM packets are supposed to have a one in the first bit, and the regular MFM packets are supposed to have a zero there, but the driver ops were in the wrong place: cancel op was at zero, then one, two, three, four, five, and so on.
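Just to illustrate the kind of scheme being described — the actual bit position and op numbering live in the driver headers and documents, so everything here is an assumption, not the real layout: one flag bit in the first byte separates the special packets from ordinary MFM traffic, and the remaining bits carry the driver op.

```cpp
#include <cstdint>

// Assumed position of the "special packet" flag bit (illustrative only).
const uint8_t SPECIAL_BIT = 0x80;

// Does this first byte mark a special (driver-op) packet?
bool isSpecialPacket(uint8_t firstByte) {
  return (firstByte & SPECIAL_BIT) != 0;
}

// Extract the driver op from a special packet's first byte.
uint8_t driverOp(uint8_t firstByte) {
  return firstByte & (uint8_t)~SPECIAL_BIT;
}

// Build the first byte of a special packet carrying the given op.
uint8_t makeDriverOp(uint8_t op) {
  return SPECIAL_BIT | op;
}
```

The bug described above is exactly the kind this layout makes visible: if the ops are numbered from zero without the flag bit set, a driver op becomes indistinguishable from regular traffic.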
So I went back and renamed all of those guys. And programming — an awful lot of programming — is renaming things. It really is. I used to think that was a waste of time, but now I think it's actually the meat and potatoes of what programming is. God's task to Adam was to name the fish in the sea and everything like that — give everything a name. That's what programming is: giving things proper names. And then, once you evolve what the thing is doing, you figure out the names really aren't quite right. You're really not allowed to leave them with the old names — "we'll just remember that 'set' really means 'clear'." No, you can't. So I renamed them.

So we did it: we actually got to sending a packet — we actually got to sending a cache update packet. Now, it's just one packet; it's not a complete cache update — that takes a bunch of packets and so forth — but we actually did it. We actually sent a cache update packet for the first time, just a few hours ago, really. You know, I was really crunching, because I wanted to have it all working today. It's not all working, but we sent the first packet, and you can see it. So here's a picture — it's not the greatest picture, but look, we'll just zoom in here. So here, west, we've got 62 — that's with curly brackets, bulk traffic in and out. But the angle brackets — that's MFM priority traffic: six bytes outbound on northeast, six bytes outbound on east, in one packet each. The six bytes, I happen to know, is the begin packet, where the guy is saying: this is a begin packet, and here is where the event is centered — a little pair of 16-bit coordinates, four bytes, giving where in the coordinate space the event center is — because all the cache updates that come afterwards won't say it. So there it is. And if we look over here — where is it? there it is — the guy to the west recognized it. So: progress, progress.

I'm glad you're here. The next update will be out in a week. Have a good week.
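As a closing footnote, here's a guess at what assembling such a six-byte begin packet could look like. The field layout, byte order, and the "begin" code value are all invented for illustration; only the overall shape — a couple of header bytes plus the event center as two 16-bit coordinates — comes from the description above.

```cpp
#include <cstdint>
#include <vector>

// Build a hypothetical six-byte "begin" packet: header byte, a (made-up)
// begin code, then the event center as two big-endian 16-bit coordinates,
// so follow-on cache update packets can omit the center.
std::vector<uint8_t> makeBeginPacket(uint8_t header, uint16_t cx, uint16_t cy) {
  std::vector<uint8_t> p;
  p.push_back(header);
  p.push_back(1);  // hypothetical 'begin' opcode
  p.push_back((uint8_t)(cx >> 8));
  p.push_back((uint8_t)(cx & 0xffu));
  p.push_back((uint8_t)(cy >> 8));
  p.push_back((uint8_t)(cy & 0xffu));
  return p;  // six bytes total
}
```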