The T2 Tile project is building an indefinitely scalable computational stack. Follow our progress here on T2 Tile Tuesday Updates.

So we have a second power zone. Remember the whole idea: if we're going to build the biggest computer ever, it can't have a central processing unit, a CPU, and it can't have a central power supply either. That means at some point there are going to be separately powered regions of the overall computer that have to talk to each other even though they share no common power source. That's what building the second power zone was for: so we could test interzone communication, where the zones are just exchanging data even though they run on separate power supplies. That remains untested, partly because the build took a lot longer than I expected, but there we are; we'll have results about that next time.

Yeah, super late today again. The quick overview: after ALIFE 2020 it took me a while, I have to confess, to get back into gear working on the tiles. Doing software engineering on the tiles is a tough environment. They're great for what they are, but for actually building significant software on the tiles themselves, which is how I'm doing it, it's slow and painful. So I dithered and did other stuff; I worked on some visualization software that I'm not quite ready to show, but perhaps next time we'll see some results of that. Long and short of it, I did get going again. There are still bugs: there are bugs at the Linux kernel module level, although I found some issues there, and there's incomplete code at the native engine level, plus probably bugs too.

What got driven home to me by the whole process of working through it again is this: the easiest way to describe what I'm doing to a computer scientist is that it's distributed computing, just distributed computing at a particularly low level, where we're trying to build a programming model on top of this distributed computer. The whole point that makes distributed computing hard is that you have to be able to deal with pieces of the thing failing. If the server doesn't respond, your web browser can't choke, at least let's hope not. The sort of easy stuff you can do on a single host, in the land of CPU serial determinism, you can't do in distributed computing land, and that's the goal here. So just as in distributed computing, just as in network computing, we're going to have all kinds of failures: a tile crashes, or your neighboring tile reboots, and the computation is expected to survive, and you can't repeat anything. That's what it comes down to. We're going to admit that some failures will get through, but we still want to engineer out as many failures as we can up front; that's the best-effort commitment. And what gets driven home to me over and over again is that the way you do that, when you can't repeat everything because you're not in control of everything, is with log files and tracing, so you can figure out what happened after the horses have left the barn, so you can make changes and they won't be able to get out, at least not that way, next time.
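To make that failure model concrete, here is a minimal sketch of what a best-effort handoff to a neighboring tile might look like. Every name in it (Packet, NeighborLink, trySend, and the stubbed hardware calls) is made up for illustration; this is not the native engine's actual API, just the shape of the idea: the neighbor may be unpowered or mid-reboot, and the caller treats that as normal.

```cpp
// Hedged sketch, not T2 code: all names here are hypothetical.
#include <cstddef>
#include <cstdint>

struct Packet {
  uint8_t bytes[32];   // a small, fixed-size payload crossing a zone boundary
  size_t  length = 0;
};

class NeighborLink {
 public:
  // Best-effort handoff: the neighbor may be unpowered, mid-reboot, or busy.
  // None of those are worth waiting for, so we return immediately either way.
  bool trySend(const Packet& p) {
    if (!linkUp()) return false;     // neighbor absent: drop it and move on
    return pushToFifo(p);            // may still fail if the outgoing FIFO is full
  }

 private:
  // Stand-ins for real hardware access (e.g., a GPIO handshake and a serial FIFO).
  bool linkUp() const { return false; }           // pretend the neighbor is down
  bool pushToFifo(const Packet&) { return true; } // pretend the FIFO accepted it
};

int main() {
  NeighborLink east;
  Packet p{};
  // A 'false' return is normal operation, not a fault: no retry loop, no
  // blocking. The sender's own work continues whether or not the neighbor
  // heard us, and tracing is how we find out later what actually happened.
  bool delivered = east.trySend(p);
  (void)delivered;
  return 0;
}
```

The design point is that a failed send isn't an exception to handle; it's weather, and the layer above has to be written so the computation survives it either way.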
Tracing is what we're doing. People who've followed this project for a while know tracing has come up over and over again, all the way back to the early prototypes, although most of that took place before the YouTube channel started. The Linux kernel module got a whole big set of special tracing machinery built for it, and we're still fleshing out the tracing system for the native engine.

The last thing that happened was that the engine was generating trace data that we were weaving together using the weaver. That was great, and I was trying to use it, but it turned out that when I actually deployed it on little clusters of tiles it was messing up the disk, like it was writing to the disk too fast or something, I'm not sure. That became a problem that blocked using the weaving to find out what was going on after the fact. So I'm in the middle of reorganizing the native engine tracing so that it just keeps a rolling buffer in memory, and when a trigger event happens, like a failure or some specified thing, we try to push the current snapshot out to disk (there's a rough sketch of the idea at the end of this post). That approach isn't a good idea at the Linux kernel module level, because once a failure occurs in the kernel module you're generally kind of gone, whereas with the native engine we can usually crash under our own control. So we're going to see how that works, step by step.

So that's it. Super late; I still say noon Mountain Time here, and it's already afternoon and this hasn't been edited or rendered yet. The nerd in me feels like I should change that to, like, 5 p.m. Mountain Time or something, but the manager in me thinks, well, if we make it 5 p.m. Mountain Time then the videos won't be ready until midnight. What do you think? In any case, hope you're doing okay, and hope to see you next time.
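For the curious, here is a minimal sketch of the rolling-buffer idea described above, assuming nothing about the native engine's actual data structures; TraceRing, TraceRec, and flushSnapshot are made-up names, not the real code. Recording stays cheap and never touches the disk, and only a trigger event dumps whatever the ring still holds.

```cpp
// Hedged sketch of a rolling in-memory trace buffer with trigger-driven flush.
#include <array>
#include <cstdint>
#include <cstdio>

struct TraceRec {
  uint64_t timeNanos = 0;   // when the event happened
  uint32_t eventCode = 0;   // what kind of event it was
  uint32_t detail    = 0;   // small payload; big data doesn't belong in a ring
};

class TraceRing {
 public:
  // Recording is cheap and never touches the disk: once the ring is full,
  // the oldest record gets overwritten.
  void record(const TraceRec& rec) {
    ring_[next_ % ring_.size()] = rec;
    ++next_;
  }

  // On a trigger event (crash handler, assertion failure, watched condition),
  // dump the records we still have, oldest first.
  bool flushSnapshot(const char* path) const {
    std::FILE* f = std::fopen(path, "wb");
    if (!f) return false;
    const size_t count = next_ < ring_.size() ? next_ : ring_.size();
    const size_t start = next_ < ring_.size() ? 0 : next_ % ring_.size();
    for (size_t i = 0; i < count; ++i) {
      const TraceRec& r = ring_[(start + i) % ring_.size()];
      std::fwrite(&r, sizeof r, 1, f);
    }
    std::fclose(f);
    return true;
  }

 private:
  std::array<TraceRec, 4096> ring_{};  // fixed memory budget while running
  size_t next_ = 0;                    // total records ever written
};

int main() {
  TraceRing ring;
  for (uint32_t i = 0; i < 10000; ++i)
    ring.record(TraceRec{i, 1u, i});
  // Trigger event: pretend something went wrong and dump what we still have.
  return ring.flushSnapshot("trace-snapshot.bin") ? 0 : 1;
}
```

The reason this split makes sense for the native engine but not the kernel module is the one mentioned above: the engine usually gets to crash under its own control, so there's still a moment when a flush to disk is plausible, whereas by the time the kernel module has failed you generally can't trust the machine enough to write a file.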