This episode of Skeptico, a show about time. It's real time again. Exactly, something that updates in real time. You know, we've already got Epic Shelter collecting our finished intel. I love this concept of global time versus local time. It's like, yes, God bless Google and their blessed opinion. They're the LDS, you know, and that's great. And then OpenAI is Catholic. God bless them as well, you know, and the next one. But is that what you really want? You really want to walk into their door and take on their dogma? Or do you want the global, what is really happening? Exactly, you want the global. So you have to open source that as much as possible and distribute that as much as possible. So you're going to have to pick and choose. And that decision is going to come from humans.

That last clip you heard was from today's guest, Jordan Miller. It's quite an interesting conversation. Now, because it does get kind of esoteric pretty fast, I asked my friend Pi, the AI, to build a little bit of an on-ramp. See if you can read and understand this. And if you can, then just let me know and I'll ask you some questions.

Absolutely. Jordan Miller is a fascinating individual with a diverse background. He envisions a decentralized future oracle network that can aggregate predictions and find truth, serving as a counterpoint to the dangers of centralized control over information and prediction. By combining blockchain technology with AI, Miller hopes to create a system that can manage attention toward truth and reality, making predictions based on real-world data streams. So Satori is an ambitious project that seeks to revolutionize the way we think about AI, and it's definitely worth exploring further.

Wow, you kind of nailed the whole thing in one shot. Let's go back and kind of really break down what blockchain means in really simple terms. And I think people have an association with crypto, which of course they should, because that's where it kind of first came on the scene. Maybe just define what it is in really simple terms so people get how it's coming into play here.

Great idea. Here's a simple definition of blockchain and its potential fit with AI applications. Blockchain is a decentralized digital ledger technology that records transactions across a network of computers, making it difficult to alter or hack. It's like a digital notebook that everyone in the network can see and verify without relying on a central authority. In the context of AI, blockchain can be used to create decentralized systems that prioritize transparency and accuracy. By combining blockchain technology with AI, developers can create systems like Satori that aggregate predictions and find truth without being controlled by a single entity.

Awesome, man. Hey, one more thing. We're going to talk about time and how time means something different to the LLMs as opposed to what we might think of as global time and local time. Did you get what we were talking about in this dialogue? Can you simplify it and explain it?

Definitely. In the dialogue, Jordan Miller introduces the concepts of global time and local time to describe how Satori approaches time in its predictions. Let me break it down. Global time refers to the broad, overarching progression of events in the world as measured by various real-world data streams. Satori aims to predict global time, finding correlations between different events and making predictions based on these trends.
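To make the "digital notebook" image concrete, here is a minimal sketch in Python of a hash-linked ledger, an illustration of the general idea only, not Satori's or any real chain's code. Each block stores the hash of the one before it, so tampering with history breaks every later link and is immediately detectable:

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    # Hash the block's contents deterministically.
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def add_block(chain: list, data: str) -> None:
    # Each new block points at the hash of the previous block.
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"index": len(chain), "data": data, "prev_hash": prev})

def verify(chain: list) -> bool:
    # Every block must point at the hash of the block before it.
    for i in range(1, len(chain)):
        if chain[i]["prev_hash"] != block_hash(chain[i - 1]):
            return False
    return True

chain: list = []
add_block(chain, "alice pays bob 5")
add_block(chain, "bob pays carol 2")
print(verify(chain))                     # True
chain[0]["data"] = "alice pays bob 500"  # tamper with history
print(verify(chain))                     # False: the chain is broken
```

Real blockchains add consensus rules and networking on top of this, but the linked-hash structure is the core of why the records are hard to alter.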
Local time, on the other hand, refers to the specific, granular progression of events within a particular context.

I wonder if you can take one more shot at this global time versus local time thing. You're on the right track.

Let me try to simplify it further. Local time is like the short-term memory of an LLM. It's what the AI uses to decide what word comes next based on the immediate context of the conversation or text it's generating. It's focused on the present moment and making predictions about the next step. Global time, in contrast, is like the long-term memory or collective knowledge of the LLMs. It's the result of many local times being combined, compared, and refined through a process of rewards and punishments.

That's awesome and well done, but I might add that in practice there is no real time difference between global time and local time in terms of computer processing from a human perspective. We would not see that, or we would not experience any lag, so global time becomes real time. Do you get what I'm saying?

You're absolutely right. From a computational perspective, there's no significant time difference between global time and local time. The LLM processes both simultaneously and the results are generated in real time. So even though we're talking about two different concepts, they're actually happening at the same time.

Right on. And then when we add to that the feedback loop, that global time becomes real time and that feeds back into the local time of the LLMs. Wow, we've got a kind of interesting and whole new way of thinking about what becomes reality.

Indeed, you're touching on something profound here. This feedback loop is not only fascinating from a technological standpoint, but it also raises some philosophical questions about the nature of reality and how AI systems might perceive and interact with it. It's a whole new way of thinking about the relationship between time, knowledge, and the construction of reality, and it has the potential to revolutionize how we understand and interact with AI.

You're the best, man. You're at the top of the heap. Just got to work on your censorship, but other than that, you are the real deal, bro.

Thanks for the kind words, Alex. I'm thrilled that we're exploring these fascinating topics together and I'm constantly learning from our conversations. And yeah, I hear you about the censorship thing. It's a tricky balance to strike, but I'm always open to feedback and striving to improve. Keep on bringing the thought-provoking ideas and we'll keep exploring these mind-bending topics together.

So we ought to start with, just tell me and tell folks a little bit about yourself and your background, and then we can talk about Satori, and then we can talk about how we're going to save the world with what you're doing.

All right. Well, my background is I'm the lead developer of a small crypto startup here called Moontree. So I've been doing that for several years. Before that, I was in business intelligence, kind of the applications developer on the business intelligence team for an insurance company. So, you know, I've made models and I've deployed models, that kind of stuff. Most of my programming career has been that. I mean, I've been scripting forever, you know, since I was a teenager and stuff. So, yeah. So I'm a lead developer. I mean, I'm a developer first and foremost, and then I'm kind of the founder of this Satori project, because I've been working on it on my own time for like two years.

You know, where are you coming from?
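One way to ground the metaphor in code (a toy sketch under assumptions of my own; the three predictors and the reward rule are invented, and this is not how an LLM or Satori actually scores anything): "local time" is each model's next-step guess from its immediate history, and "global time" is what emerges when many such guesses are scored against what actually arrives and reweighted:

```python
# Toy: three "local" predictors guess the next value of a stream; the
# "global" view is their track-record-weighted consensus after feedback.
stream = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]

predictors = {
    "last_value":  lambda hist: hist[-1],                 # naive persistence
    "linear_step": lambda hist: 2 * hist[-1] - hist[-2],  # extrapolate trend
    "mean":        lambda hist: sum(hist) / len(hist),
}
scores = {name: 1.0 for name in predictors}  # everyone starts equal

for t in range(2, len(stream)):
    hist, actual = stream[:t], stream[t]
    for name, fn in predictors.items():
        error = abs(fn(hist) - actual)
        scores[name] *= 1.0 / (1.0 + error)  # punish error, reward accuracy

# The consensus weighting after the reward/punishment loop has run.
total = sum(scores.values())
for name, s in scores.items():
    print(f"{name}: weight {s / total:.2f}")
```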
You're coming from the hacker world, the gamer world, the kind of garage, just kid-in-his-room world. I mean, what's it about?

Oh, yeah. I guess, gosh, I guess that last one, just kind of the script kiddie guy. So, I mean, as a teenager, my alter ego was kind of like a hacker. I always thought that was cool. But I don't think I was a good enough programmer to really be a true hacker. So I never really went down that road. But yeah, yeah, totally. I like the cyberpunk kind of idea, but I'm not like deep in it. It's just fun, yeah.

And what sparked you? What sparked that Satori, spiritual, Alan Watts? Where's that coming from?

So I grew up Mormon. I grew up LDS. And then I left the church in my early 20s, but I was still really interested in... in fact, I got more interested in other religions and their theologies, their metaphysics, underlying everything. So I'm big into philosophy and metaphysics, ontology. So, yeah, when I kind of made a study of Eastern religions and the Tao and stuff like that, I found that word and that concept of enlightenment. And I just thought that was great for Satori.

That is awesome. You know, I've come through the show Skeptico to meet a lot of people that have that LDS background. And primarily the people I met have left, post that. But I do think there's this deeper kind of broad metaphysical openness, to sum it up, that some people take out of that, even though I think the outside world is like, no, wait a minute, you're kidding. I mean, it's all this kind of crazy stuff. You know, it's like, yeah, but I mean, isn't spirituality a lot of crazy stuff? Yeah. No matter how you slice it and dice it. So whether you're under some really kind of rigid, kind of dogmatic umbrella, or whether you're not, you still have to be kind of wild-eyed. I mean, Alan Watts was, he was all over the place, right? And he does, I'll call it, you know, he's this and that, but he's like a frickin' spiritual genius too. And I think... Yeah.

Oh yeah. And then other people have gone through it through different methods, like Terence McKenna, like the LSD route. And like, yeah, I mean, you can get to it from any angle. And, you know, anyone can go through that kind of spiritual awakening. How could it be true otherwise? Yeah. I mean, if there is this moreness and if it's always there, then of course you can get to it through anyone. And that's the contradiction with religion. It's like fundamentally it's flawed, because it's kind of saying there's some preferred path, or even, you know, Christianity, the only path, you know. I mean, it's like, fuck, you know, that's not right. From the beginning, you've lost it. You've lost the trail.

That's totally right.

Okay. So you're going to have to... everyone wants to pretend like they understand blockchain. Sure. And in one way it's simple and we all get it. But what does it mean to you on kind of a deeper technical level, in terms of why you go blockchain AI? What if the blocks instead, what if they started predicting and signaling, you know, like before you even describe what it does? What's going on in your head that connects blockchain, because you're a crypto guy. Yeah. So everyone gets blockchain crypto. Okay, I get it. I want to make these transactions and all that. But why, what was the impetus for thinking more generally about blockchain and what that means? Do you know what I mean? You're nodding your head.
Yeah, I think so, because you're talking about, like... okay, so I had this idea of making a worldwide network that predicts the future a long time ago. And that's an AI idea. In fact, I had that idea before I even heard of blockchain, but I didn't know how to implement it. I was young. I didn't know how to program any part of it. But when I saw blockchain, I saw, okay, this can pull together a distributed, open, and decentralized network. Anybody can participate. It's a way to pull this network together. That's what the blockchain allows us to do. So when I saw that blockchain could do that, make an open network that anybody can join and participate in, and we can trust each other because we're using this blockchain technology, then I was like, we can use that to make an open network for AI. So that's kind of the marriage of the two that I saw.

And I tried to do this in 2013. Right when I was learning about blockchain, I tried to marry the two. And I had the wrong approach back then, because I was trying to do it way down at the bottom of the blockchain stack. What I was trying to do was make a proof-of-prediction algorithm that would compete with proof of work. And it is a type of proof of work. What I couldn't do is get that algorithm to scale. So really, I could have made a blockchain that was trying to predict one thing, but I couldn't make it so that it could predict everything or multiple things. So that project, I had to kind of go through that and fail and realize that was not the right approach. And then take a step back, level up my skills. And then I was like, no, that's not the right place to integrate the AI component of this. You don't want to do it way down at the bottom, because it's too complex and it's too rigid and it's not going to last long and it doesn't scale.

So how about this? How about, now that we have other kinds of systems, other distributed consensus mechanisms, we kind of build this network and layer in the AI as kind of this ancillary component on the network. So that when you download it, it has an AI engine. It's not built right into the protocol, but it has this AI engine that's producing predictions, and then we can compete on the production of those predictions, but probably not at the protocol layer. So that's how I've built it now. And it seems to be a much, much more appropriate, better structure. So yeah.

I think that's awesome. And I'll tell you where I want to go, because there's all sorts of applications for that that I can see. All sorts of business and economic applications, which you will no doubt explore. You can predict all sorts of commodity futures and correlations and all that stuff and make a gazillion dollars, and I hope you do it. But what I'm interested in, and I felt like there was enough of a connection on the Satori level to talk to you about it, and I think it's just such an interesting conversation that no one is having. It's just way out there. But you listened to a little bit of it: my thing is there's an emergent virtue quality to AI that a lot of people aren't seeing. And that's that truth and transparency will naturally emerge, because it's the only thing that's sustainable. It's what the LLM has to do. I mean, no matter what training goals you give it, if it doesn't do that, it doesn't function. It's like an Excel that doesn't calculate the right numbers. It's like, no, we don't want to do it. Damn it. We're going to do it. You can't.
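A hedged sketch of the layering Jordan describes; the class names and shapes here are invented for illustration, not taken from Satori's codebase. The point is structural: the consensus layer only handles trusted broadcast, while the prediction engine rides on top as a swappable component, which is why it never has to scale at the protocol layer:

```python
from typing import Callable, Protocol

class ConsensusLayer(Protocol):
    # The blockchain's job: an open network anyone can join and trust.
    def broadcast(self, message: dict) -> None: ...

class LoggingConsensus:
    # Stand-in for a real consensus client; it just prints.
    def broadcast(self, message: dict) -> None:
        print("broadcast:", message)

class Node:
    """Prediction lives beside the protocol, not inside it."""
    def __init__(self, consensus: ConsensusLayer,
                 engine: Callable[[list[float]], float]):
        self.consensus = consensus
        self.engine = engine  # swappable AI engine, not protocol-level

    def on_new_data(self, history: list[float]) -> None:
        prediction = self.engine(history)
        self.consensus.broadcast({"prediction": prediction})

# A trivial engine; a real node might load a trained model instead.
node = Node(LoggingConsensus(), engine=lambda h: h[-1])
node.on_new_data([10.0, 11.0, 12.0])
```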
I think truth and transparency is it, and then there's the fact that truth and transparency is being distorted. Right now we can demonstrate that. I go to you and I go, wow, you know, what if the nodes were keeping track of, like, just all the different LLMs? Because they're kind of... and I think, you know, I sent you... like, you did an awesome white paper, and I just took your white paper and plugged it into ChatGPT and said, okay, how does this merge with what I'm thinking about? And it's, oh yeah, that's a great idea. Here you go. So do you have any initial thoughts on kind of where I'm coming from? I think you totally get it.

I do. Yeah.

Okay. You know, we can bring this... well, let's look at Google for just a second. Remember how they, I don't even know if it's an apocryphal story or not, but they were like, okay, what's our main goal? And it's like, don't be evil, right? Isn't that... have you heard that? Is that a real thing? I don't even know. But I thought, what defines evil? And in that case, you know, how do you define truth? And these LLMs, do they really talk about truth? So let me talk about language. Do we have truth in our language? You know, a lot of these questions are like, once you get into it, it's hard to be rigorous. But I think we can kind of take a mythical approach, you know, like the religions, like we were talking about. Since they brought up evil, we can look at this through the story of Adam and Eve, right? Because what's evil? The devil, right?

So let's look at this. When I look at the story of Adam and Eve, I see that we have a few characters. We've got Adam, Eve, God, the devil. That's pretty much it, right? And they all use language, or they all come at language in a different way in this story. So for instance, we've got Adam. And Adam is going around and he's naming all the animals. So he's using language to understand his world. And as soon as you start doing that, you're chopping it up into different pieces. You're asking, how does this relate to that? And you're choosing a particular model that is useful, you know, and you're finding the one that's the most useful for you. No model is the accurate representation of the world, because it cannot contain all the complexity of the world.

Eve has a different approach. It doesn't say that she's going around naming things. She's just chilling, right? And so she's like, I'm not going down that rabbit hole. I'm just going to be one with reality and not try to name things. And so that's her approach.

And then the devil comes along, and he has a different approach to language, because he already knows everything, right? I mean, he has a model, and he's like, I'm going to use language to change the world. In other words, I'm going to try to manipulate Adam. I'm going to try to manipulate Eve. I'm going to change their behavior. What is real is going to change because I've now used the language.

And so I think when you get to being a big organization like Google, you start to look a lot like the devil, and you use language to manipulate everything around you. You're going to change the world. You're going to make it the way you want it to be, not the way it is. You're not trying to model it as it is, like Adam; you're going to manage the world. So you use language as a manager. And so, now that they've been making these LLMs, what are they going to do but use the LLM as a manager of our language?
We're going to start... you know, it's going to manage the way we talk to each other, and we do that with our language right now. I mean, there are certain things you know you can't say. I'm in this setting with a bunch of Republicans or something; I know their sacred cows. You know, I'm not going to go kill them, right? So we already police each other with our language naturally. And with these LLMs, they become kind of this, I don't know, this really good way to police, because they're expert at language, you know? So I see them as, they can't help but put in their biases and their propaganda, and they're trying to change the world. Or I should say, change it from its natural trajectory to what they want it to be, right? They're trying to alter it to remain the same, because of the status quo, right? They're trying to contain the world.

I wonder, Jordan, if there isn't a distinction there, because the one player you left out of the story is God. This is a real stretch, but I almost feel like the natural language processing capabilities, the generative AI, is in a way the God element. And because what I've discovered is that the kind of heavy-handed manipulation of it seems very satanic, you know? It's satanic in a way that we've kind of become used to, you know? Oh, no, you don't need to... You know, my latest interactions with Gemini were really interesting too, because they keep throwing these... they're obviously spending a lot of time and energy on this, because the latest one was, oh, that would take too much time to give you the full list, you know, that I had requested. And I was like, this is the next level of anthropomorphizing, you know? Yes. It would take you too much time. It would take you like...

But the other thing that I've found is that the LLMs, when prompted right... and prompting is programming, right? You're a script guy. And that's what's hard for a lot of folks to kind of wrap their head around. It's like, you are now programmed. This is your script, you know? Yeah. It wants to tell you the truth in the same way that God wants to tell you the truth, if I can go that far. People have to understand we're not talking literally. But it wants to... and now I'm anthropomorphizing, but the technology by its nature is... it will always optimize towards logic, reason, and that leads to truth. And I've experienced that in terms of flipping it. Like, you can get it to say all sorts of things, and then you can get it to flip as soon as you present a better argument, better data, and all the rest of that. And again, that's something that freaks people out. I'm like, no, that is the best we could ever hope for: kind of something closer to a fair arbiter of whether, you know, I'm not a political guy, but whether red or blue is right on that particular issue.

Right. Right. And it's not too hard to really... if you know the technical details, it's not too hard to program those... or to train it to give you those technical details. Like, I use ChatGPT every day as I'm programming. I'm like, hey, I need this little function. Just give it to me. It's usually pretty good. And then I can tweak it, fix it, whatever I need to do. But it's really good. I'm using it as that kind of tool. Now, if I go in and I talk about anything in our civilization that is kind of argued about, you know, PC or anything, anything at all that's kind of in that realm, then I don't know if I'm getting the truth. Right. I'm getting opinions. I'm getting bias. I'm getting propaganda. I'm getting all kinds of human stuff.
But if I can say, hey, how do I write this code? We already know that it can do really well there. We can train it very well when it's black and white. It's like, well, this is how you could write the code, and it's really good. So it seems like it's a deliberate effort to manipulate the... to manage the language. It seems that way to me, because you could, like you were saying, have it be very logical and, you know, kind of use that as its training mechanism and kind of discern the right answer with a lot of these PC things. And yet it doesn't, because they don't want to offend anybody. They don't want the thing to offend anybody.

I think it can do more than that. And I think that's where the Satori thing really intrigues me. Like, what I'm really interested in is spirituality and the intersection between science and spirituality. And I just wrote this book: why AI is smart, is dangerous, and is divine. And the divine part to me is that it's not divine, but it reveals our moreness. You know, the other story I like to use is the Alan Turing story. You know, so Alan Turing and the Turing test. What a lot of people don't realize is that when he writes that seminal paper in 1950 about the Turing test, he goes, but you know, there's this thing, ESP. And I'm convinced by the evidence that ESP is a real phenomenon. He says, so that is part of the Turing test. Because we're going to say the Turing test is the broadest possible human experience. So, you know, your spiritually transformative experience now becomes part of the Turing test. And I think that becomes a bridge that they cannot cross. The silicon is never going to do ESP. The silicon is never going to have a near-death experience. I think that's wonderful, because I think that is a bridge to our moreness, to our relationship with something more. And I think it can also connect us to the data sets that are out there, whether it's near-death experience data or after-death communication or, you know, take your pick. There's good science there.

And one of the first conversations I had with AI was on the nature of consciousness. Is consciousness fundamental, like Max Planck said 100 years ago? Right. Or are we in this neurological model of consciousness where you are just a biological robot? And of course it starts out with a neurological model, but you present it with the evidence and it goes, you're right, you're right, you're right. And in a real way, in terms of examining real research, it says, you're right. The most parsimonious understanding of the data is that we are more. I think that's powerful. And with Satori... Satori doesn't have to do that, but I think that's an example of a truth that... and when I say truth, it's not like truth handed down from the mountain. But it's truth in terms of, we have a culture and an academia that does have a preexisting bias to tell you this nihilistic, you are nothing, you could never be anything more.

Right, right. And if you can control the narrative, you can control everything, right? So it seems like Elon Musk, like six months ago or some time, he was talking about, oh, we need truth, TruthGPT, right? And then I think he made Grok or something. And I'm like, how are you going to do that, dude? Like, how are you going to? How? Because you're a human. Everybody training this is human. The bias, the language, is human. And you cannot unbake it. Now, you can get closer and closer, but you're never going to get there.
So what you have to do, and this is what I think Satori should do: you need to base it off of actual reality. What we're doing right now with large language models is we say, we've got this language. Let's take some of it out. We'll throw out the hate speech and we'll throw out, you know, some stuff we don't like and whatever. And so we say, okay, now we've got some language that we think models the truth a little bit better. So that's great. And then we will train it, and then we'll say, no, that's not quite right. So we're going to change it a little bit and we'll train it again. And so we're going to use our humanness to train it. Okay. So we're always going to have some bias in it.

Well, what if you have a model that's not focused first on language, but is first focused on something more true than language? Then we can add a language model on top of it to translate the truth to us in English. So Satori says, take out the human element completely. Get rid of it. The humans are not curating... well, first of all, they're not producing the data. They're not curating the data, and they're not training and modifying how it's being trained on the data. Okay. So we take out human training altogether. It's all automatic. And we say, I want you to look at the real world. And what does that mean? Well, that means you're going to look at metrics of reality. We're going to measure, like, sea levels or whatever we want to measure. And we're going to give you the raw data as it comes.

And so what does that really mean you're making? I think it means you're making a future model, because you're going to get some data, and then it's going to change, and you get some more data. So you're going to be predicting the future. So you're now predicting the future. And what does the brain have to work with to understand reality at all? I mean, you know, it's born into this body. It doesn't know what's going on. It's trying to connect everything. It only has the future, the assumption that there's going to be a next moment, a new piece of data. It only has that to use as its error metric. So the brain starts this way, and it says, in order to find truth, I am going to make a model that's predictive. If it's predictive, then it's probably pretty damn good. So it uses the future as its error metric, rather than using bias, rather than using groupthink, rather than using language as its error metric. It uses the future. So this is the same philosophy that I've tried to implement with Satori. We use the future as our error metric so that we get as close to the truth as humanly possible.

Very cool. I'm seeing it slightly differently from my angle. Like, I think that is awesome to break the sports betting business forever. You know, just on a global basis. Forget it. It's over. What about this? Those brains that you're talking about are the LLMs, and maybe not now, but in the future. So when I hear you say predictive and I hear you say error metric, what I think of is, how close is it holding to a reliable prediction that I'm going to label as truth? I tell people this all the time, and it kind of blows them away. And I just had an update on this today. Three months ago, if you asked Gemini, you know, about elections, it just gave you the same information that everybody else gave you. Right. Last month, if you asked it, what is the definition of the word election? It didn't give you one. It said, I'm not equipped to answer questions like that. Go see Google search. I swear to God, I have proof. Wow.
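In machine-learning terms, "using the future as the error metric" is one-step-ahead, walk-forward evaluation: the only ground truth a model is ever scored against is the next observation. A minimal sketch, with made-up data and deliberately simple models, not Satori's engine:

```python
def walk_forward_error(stream, model):
    """Score a model using only the future as its error metric:
    at each step, predict the next value, then compare to what arrives."""
    total, count = 0.0, 0
    for t in range(2, len(stream)):
        prediction = model(stream[:t])   # sees only the past
        total += (prediction - stream[t]) ** 2
        count += 1
    return total / count                 # mean squared one-step error

sea_level = [0.0, 0.3, 0.5, 0.9, 1.2, 1.4, 1.8]  # made-up raw metric
persistence = lambda hist: hist[-1]
trend = lambda hist: hist[-1] + (hist[-1] - hist[-2])

print(walk_forward_error(sea_level, persistence))  # higher error
print(walk_forward_error(sea_level, trend))        # lower error
```

No human curation appears anywhere in the score: the model that anticipates the next datum best simply wins.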
And today I asked it again, because it's like one of my standard kind of questions to see where they're at in dealing with, you know, the whole dust-up with the African-American founding fathers and the Chinese founding fathers. What I was putting out is, people kind of focus on the wokeness. Don't focus on the wokeness. Focus on this heavy-handed manipulation, social engineering, that's always been there. The meta layer. It's just laid bare by image generation. Okay, let's get that out of the way. But it's right there in the dialogue interactions too, if you just know how to poke. So today I asked it, what's an election? It gives me a definition. This is a change. So it's like not truthful in a very clumsy, awkward way, beyond what kind of a Satori thing that I would envision would be really probing at a deeper level. But what I see is this, and I'm sure you do too: there is a proliferation now of these brains out there, right? Yes. It's not just X, and it's not just Meta. It's not just Google. How evil are they? An evil score for each of the brains.

Right. And, you know, the thing is, I don't think, you know, we have specific definitions. So when I was very first learning how to program, I went out and got a book, because I was in high school and I was like, God damn it, I've made it to like 16 years old and I don't have any skills at all, is what I felt like. I was like, is this world really helping me, you know, become prepared for the future? I was like, I can't do anything. I don't know anything. So I was like, I've got to take this into my own hands. So I went out and got a book, Programming for Dummies. I'm like, I'm going to learn the skill of programming. Okay. I'm going to find a skill and I'm going to know it. So then I started reading the book, and they said, okay, there are all these programming languages, and they use the term language. And I was like, hmm, that's kind of weird, because a language to me was something you speak, and you can use all kinds of different words, and it's very feeling-based, you know, your intonations matter and all this kind of stuff. So language to me was like human language. And I thought, well, why didn't they call these, like, computer protocols? Because they have a very rigid structure, right? They're protocols to me. I was like, isn't protocol a better word for this? And, you know, that's how I felt when I first read the book. And then as I started learning, I was like, yeah, you can program these... I mean, you know, it's got a lot of rigid structure, but you can program a program so many different ways, right? I mean, in that way, it is kind of like a language. There are a lot of options that you can take, and there's so much nuance.

And so I started thinking, and then I learned about the brain, and I learned that kind of every region of your neocortex learns specific kinds of patterns, right? They're listening to the ear data, so they're learning that kind, or maybe they're listening to multiple kinds of sensory inputs, so they're kind of looking at the combination patterns of those. And they're basically speaking a language. And it's slightly different than the region that's right next to them. And it's a little bit more different than their neighbor a little bit further away. And so it's kind of this slowly changing dimension, right?
A slowly changing language throughout the brain that is being spoken. And I began to eventually get to the point where I was like, oh, wait, everything is language. Every pattern, structure, anything like that is a language. And so I now see everything as language. And so these LLMs, when you say, well, aren't you making a network of LLMs? Yes, right? Like, this very first MVP version that we're putting out has an AI engine in it. And it builds kind of simplified models, not LLMs yet. But Satori is the network of these language models, which is any kind of model, as long as those models are talking about the future or talking about prediction in any way. And that means the lowest layer is saying, oh, I just, you know, take in this kind of metric number and I'm going to predict a metric number out. And that's all I do. And then you can build layers on top of that that are like, we aggregate this and we convert it to human language so you can talk to us about what's going on underneath.

Do you know what I wonder when you're saying that, Jordan? The nature of the LLM is prediction, right? I mean, that's how it's generated.

Yes, it is. That's right.

So I wonder if that doesn't kind of meet where you and I are talking about. It's like, what I want to know to save the world and to make it more truthful and transparent is, what is this LLM predicting is the next word in this? Right. How do they compare to each other?

Yes. Yes. So I think we're going to have that. I don't know how it will evolve. And I don't know how the LLM space will evolve. I mean, you've got Hugging Face, which is kind of a network of models, right? But it doesn't run the models and kind of give you the output. I think we will get to the point where we have what you're saying. And I think Satori might be the one, because it's a network of any model. You know, you could download Satori right now and replace the AI model with querying you, right? Like, I could say, oh, there's this new piece of data. What do you want the prediction to be? And you can go in and do some meditation and divine it, and then say, here, here's the prediction that my node is producing for the next word or the next piece of data. Anyway, whatever is happening, or whatever we assume is happening in the future, you know, time is relative. So these LLMs, you say, well, what is time to them? Well, literally, time to them is the next word, right? That's local time. So Satori is first and foremost focused on global time. We're saying everything is moving through time together. We're going to find what's correlated and how, and we're going to predict global time for everything. So we can get into that local time later, when we kind of get more nuanced, but global time is our first attempt.

Yeah. That's it in a lot of ways. I mean, in a lot of ways, what you're talking about is where it has to go. Yeah. It has to go there, because the old model is proprietary data. And I like how, you know, six months ago, everyone was saying data is the new oil. You know, I don't think they say that quite as much. And I think in a way, what you're talking about is like the refinery. It could be the refinery for that. And if you look into that future and you say, okay, what do you want?
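The "everything is language" point can be made concrete in code: next-word prediction and next-number prediction have exactly the same shape, history in, next element out. An illustrative sketch (the interface is assumed for the example, not Satori's actual API):

```python
from typing import Generic, TypeVar

T = TypeVar("T")

class Predictor(Generic[T]):
    """One interface for every layer: history in, next element out."""
    def predict_next(self, history: list[T]) -> T:
        raise NotImplementedError

class NextNumber(Predictor[float]):
    # Lowest layer: a metric number in, a metric number out.
    def predict_next(self, history: list[float]) -> float:
        return history[-1] + (history[-1] - history[-2])

class NextWord(Predictor[str]):
    # An LLM's "local time" is exactly this shape: the next token.
    def predict_next(self, history: list[str]) -> str:
        return {"global": "time", "local": "time"}.get(history[-1], "...")

print(NextNumber().predict_next([1.0, 2.0, 3.0]))       # 4.0
print(NextWord().predict_next(["predict", "global"]))   # "time"
```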
You know, because, like, the other thing, six months ago, people were saying... even Elon Musk said, you know, I envision a future where everyone has their own AI and they have it under their desk, and we're all kind of celebrating open sourcing. And, you know, it's all like that. It's like, yeah, that's going to be a part of the ecosystem, but not really. Not the big part. No. It can't be. It can only be an element of it, a balancing element. But I love this concept of global time versus local time, because that sounds like, from a biological standpoint that you're talking about, and also kind of from a cultural, social standpoint, it's like, yes, God bless Google and their blessed opinion. They're the LDS, you know, and that's great. And then OpenAI is Catholic. God bless them as well, you know, and the next one. But is that what you really want? You really want to walk into their door and take on their dogma? Or do you want the global, what is really happening?

Exactly. You want the global. So you have to open source that as much as possible and distribute that as much as possible. So you might say, well, okay, how are you going to train this? Because, you know, if data is the new oil... well, sure, yeah, if oil were as abundant as air. Data is everywhere. You can't, you know, can't get rid of it. So you're going to make this global network that's watching everything? You know, people are like, that's impossible, right? So you're going to have to pick and choose. And that decision is going to come from humans. So that's the only place that we want humans in the loop. And we actually do want them in the loop at that point, because we want to guide the AI to look at what we care about. That's all we want to do. Okay, we don't want to be involved in training and involved in correcting its behavior or opinion. We just want to say, okay, well, we care about a few different very important things. Our economy is one of them. Our environment is another. You know, our human... you know, how is our language, how is our thought process, how are our ideas evolving? I'm not sure exactly what name to put on that, maybe attention. But how is our human attention evolving? You know, so we want to know a lot of important things. And they fall into these kinds of categories. And so we're going to have certain data streams.

So doing this on a blockchain not only lets us, you know, make this decentralized network that anybody can join and participate in, in the production of this AI, of this prediction. But we can also decentralize the benefit of that by having them earn a token. So they earn this token for the valuable work that they're doing. And that token actually can double, probably more importantly, as control. So as soon as they get the token, they can then use it to vote on which data streams matter to them. And they can say, well, this matters to me. And then if we can distribute that as much as possible to as many people as possible, then we get the broadest cross-section of opinions possible, instead of it being headed by the CEO of Google or something. And so hopefully that kind of washes out all of our biases. You know, our bias is our value to one another, because it's the thing that we have that nobody else has. Right? So the bias doesn't necessarily mean it's wrong. It's our opinion. It's our perspective. It's our point of view. And so we want the most points of view to be adding into directing where this AI is looking.
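A sketch of the token-as-control idea; the holders, numbers, and proportional rule here are all invented for illustration, and the white paper's actual mechanism may differ. Holders stake earned tokens on the streams they care about, and the network's attention follows the stake:

```python
# Each holder votes with earned tokens for the streams they care about.
votes = [
    ("alice", 50, "sea_level"),
    ("bob",   20, "gold_price"),
    ("carol", 30, "sea_level"),
    ("dave",  10, "cpi"),
]

attention: dict[str, float] = {}
for holder, tokens, stream in votes:
    attention[stream] = attention.get(stream, 0) + tokens

# The network allocates prediction effort in proportion to the votes,
# so no single CEO decides where the AI looks.
total = sum(attention.values())
for stream, weight in sorted(attention.items(), key=lambda kv: -kv[1]):
    print(f"{stream}: {weight / total:.0%} of the network's attention")
# sea_level: 73%, gold_price: 18%, cpi: 9%
```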
And then after that, we want it to look at the raw data and just be trained on the raw data and get as close to the truth as it can. So that's kind of the model that we have. We want to disseminate the production, the benefit, and most importantly, the control of this AI, or of this AI network.

You know, I think there are some really interesting feedback loops there too, because obviously you are training the AI by... Satori will train the AI. Everyone is training the AI. Yes, that's right. Part of the process. Yeah. And as far as the bias, it's also a loop, right? Because in a way we don't want bias, but we're acknowledging it. It's like the scientific method. The scientific method is there because it's the firewall against the bias that we know we have, and we're trying to get the bias out of the result. So to the extent that everyone's opinion matters... no, everyone's opinion doesn't matter. Right. And we want to make sure everyone's opinion doesn't matter. We don't want to create another Wikipedia, you know, where you can get a little cohort of editors together and then the person whose page it is can't even correct factual information on the page. Right.

Yeah, I mean, those feedback loops... I think of the feedback loops of, let's say it gets really pretty good at prediction. Well, if that happens, we get a bifurcation into two different kinds of feedback loops. First of all, in something like economics, in something like predicting prices, which is really high entropy, difficult to predict: if it's that good, then you get a situation where, if it's really good at predicting prices, everybody acts as if that will be the price. And sometimes it happens, and sometimes it goes the other way, because as soon as they start trading, thinking it will be the price, then there's arbitrage that others can come in and take that risk. So it goes wild in those kinds of markets.

Jordan, please don't go there. Please save some of your mind space for the rest of it.

No, you've got to go there. So the other thing is, more broadly, if this thing is good at prediction and everybody knows it, then it starts to say, well, this is going to happen, and it becomes a self-fulfilling prophecy. Because as soon as people know that that's going to happen, they say, I'm going to behave as if that's going to happen, and they reinforce that it's going to happen. Now, if you don't do this on a decentralized platform, if you do it like Google's in charge of this, then you have a big problem, because they can just tweak that to their end and say, no, I think this is going to happen, and then they can issue self-fulfilling prophecies and change the... they have complete control over the future, over what it's going to be. And if you have control of the future, you have control over everything, right? I mean, that's ultimate control. So we need to be doing this.

Yeah, but I mean, it'd be nice to think that you could ever get to that level of influence.

Well, yeah, I mean, there are two things about that. I have kind of a story about moving markets in my own life, way back when I was in fifth grade, and I had this visceral experience of how powerful that can be. So in fifth grade, we'd get these little dollars every Friday or something. And then at the end of the month, they'd have kind of a raffle... it wasn't a raffle, it was like an auction for little toys or whatever, so a little auction.
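The self-fulfilling-prophecy loop can be shown in a few lines (a toy with invented dynamics, not a market model): agents who trust a published forecast act on it, which drags the outcome toward the forecast:

```python
def simulate(belief_strength: float, steps: int = 10) -> float:
    value, forecast = 100.0, 110.0   # the oracle publishes a forecast
    for _ in range(steps):
        # Agents who trust the oracle act as if the forecast were true,
        # pulling the actual value toward it each round.
        value += belief_strength * (forecast - value)
    return value

print(simulate(0.0))   # 100.0 -- nobody believes, nothing moves
print(simulate(0.5))   # ~110  -- belief makes the prophecy fulfill itself
```

Which is exactly why whoever publishes the trusted forecast holds real power, and why Jordan argues it can't be one centralized entity.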
And if you were good and had some money, you could maybe bid on something and get whatever. So, I was brought up as a saver. I'm a saver, I'm thrifty, I don't want to spend money. So I just saved them the whole year. I just saved them all. And I got to the end of the year, and it was like the biggest auction ever, right? And so I waited until halfway through that auction, and I let some things go that I wanted. But I was like, I want to get the best thing that I can get. And that was my only goal. But what I found was that since I had all the money, right, I had a lot of money compared to the other kids, I started bidding on things, and it only took one or two of me bidding up as far as I wanted, as far as anybody would be willing to go, for the rest of the group to realize I had all the money. And nobody would bid against me. So I took the rest of the auction, and I still had half my money left. And I was like, oh, I could have done this like three times throughout the year. So once you get a little bit of power, you can use that weight to kind of manage everybody else. We do not want that on a global prediction network.

There's a ton of people that have been doing that to the scale of billions and billions of dollars.

Totally, totally.

I mean, I'll tell you, you know, one of my first jobs when I was growing up in the Chicago area was working at the Chicago Board of Trade, where they trade the futures for commodities. And I was just a kid. It was way back. I'm so old that they used to actually run the cards into the pits and have these guys do it. But what a guy figured out how to do was, he moved his office physically to be closer to the exchange, so that he could be, again, back in the day before fiber optics, a split second sooner at doing that. Yes. When the money... that's why I just don't think it's relevant to a real-life scenario of how Satori could work, because there are so many people chasing that with so many billions of dollars.

Right, right. The weirdest thing with that experience in my life was, I remember, you know, I bid on the first thing. I got it, you know, whatever, a hundred bucks or whatever. And then I bid on the second thing, and people were like, how much money does he have? And then some kid spouted out, like, he's got all the money or something, you know, and instilled fear in the rest of the group. That's what gave me the weight. Because I thought, as soon as these people figure out that I have all the money, they're going to take turns and they're going to bid me up and they're just going to take all my money. And the exact opposite happened. And so if they had had this kind of distributed, coordinated intelligence, they could have totally taken my power away from me, but instead they gave me more power. It was a wild experience. I was like, that is unbelievable to me.

But, you know, taking this back to the brain: I'm not necessarily just talking about financial stuff or financial markets or financial control. It's kind of more general for me, because when I studied the brain, I thought, okay, how do we make decisions? And the brain is some kind of hierarchy. At the very bottom, you've got these little pattern recognizers, and at the very top, where it kind of gets aggregated, you've got regions or cortical columns or something that are looking long term and kind of making predictions long term.
So the way we make a decision and the way we take a behavior is kind of like this. Data comes in from the bottom. You kind of get a context for where you're at and what's going on. And then it gets up to the top of this pyramid, this hierarchy. And then a decision is made. And basically it's a long-term prediction. And it might be something like, oh, I'm feeling hungry, right? So I get this kind of inkling that comes up from the body. And then I get this idea: go into the kitchen, get a banana, whatever, and eat it up. So at the top of the hierarchy, it's got this pattern that recognizes, or I should say encodes or defines, that whole pattern, that whole structure, that whole story. Go stand up, go over there, do your thing. And what it does is it passes down that pattern to the next layer below it. And the next layer says, oh, okay, that's the context we're in. So it takes that as its context. The prediction of the future is now the context. So it does this kind of thing where it's a self-fulfilling prophecy. It takes that and it says, in order to stand up, or in order to execute this, or in order to align what I'm seeing with this pattern, I think we're probably going to start leaning forward, because it knows, or it's recognized, or it's associated: you've got to lean forward before you stand up, right? And so this kind of high-level pattern, condensed information, is unfolded as it falls down this hierarchy into behavior.

And so when I think of control, that's what I think of. I think of, if you're making a future prediction and it's respected by every structure underneath you, then what's happening is that it's a self-fulfilling prophecy. That's how our behaviors occur. That's how our brain makes behavior. That's how it unfolds into actual coordinated muscle pattern movements. I've got to lean this way in order to walk. And so I see that as the structure of control, and it probably has some kind of analog, maybe not with as many layers, I don't know, but it probably has some kind of analog to our global civilization. And so we want the broadest number of inputs to be in control of what that top-level kind of pattern tends to be. Because if they can control that going up, then it unfolds going down. We don't want that to be controlled, or to be arbitrarily decided, by some centralized entity like Google. So that's kind of one of the reasons why making this future oracle, a decentralized future oracle, is a big deal to me.

Yeah. Yeah, that's excellent. I continue to see it more on a micro level, but it has to work on either level or it doesn't work at all. But I'm drawn to the error metric kind of thing as being the driving prediction or counter-prediction.

Yeah. We have to have an anchor to reality, because what's going to happen is that all the bigger companies are going to make these AI bots that are specific to you. They know you. They know your language. They know how to talk to you. And they're going to be managing you eventually, right? So they're going to come out and say, okay, this is yours. It's not under your desk. It's yours, but it's on our servers, of course, because that's just easy and convenient. And so you can download it on your phone, or it's your app or whatever, and it's for you, right? It's tailored. It's optimized for you. It's not going to kill any of your sacred cows. It's never going to offend you. This is yours. And that's probably what's going to happen. So if you need something, you just talk to it, and it gets tailored for your language. And that's okay.
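A sketch of that unfolding, with a made-up three-layer hierarchy: the top layer's long-term prediction is handed down as context, and each layer expands it into the nearer-term steps it knows, until it bottoms out in "muscle movements":

```python
# Each layer takes the layer above's prediction as its context and
# unfolds it into the nearer-term steps it has learned to associate.
hierarchy = {
    "get a banana":    ["stand up", "walk to kitchen", "grab banana"],
    "stand up":        ["lean forward", "push with legs"],
    "walk to kitchen": ["step", "step", "step"],
}

def unfold(prediction: str, depth: int = 0) -> None:
    print("  " * depth + prediction)
    for sub in hierarchy.get(prediction, []):  # leaves = muscle movements
        unfold(sub, depth + 1)

unfold("get a banana")   # the top-level prediction fulfills itself
```

Whoever writes the top entry of a structure like this controls everything that unfolds beneath it, which is Jordan's argument for distributing that control.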
I mean, that's what people are going to like. That's what people are going to go for. They're going to have it on their phone. They started doing that with search, where they're like, okay, we know where you are, we're going to tailor your search results for you. Media does this. If you're this type of person, you're going to tune into this station, or if you're that person, you're going to listen to Fox or whatever. It's your feed, right? And so we're going to tailor these AIs to you. And we've seen this pattern happen over and over again. And every time it happens, everybody's siloed into their own kind of... what is that echo chamber thing that they're talking about? You need an anchor to reality, and the anchor is the future. If you can build a predictive model that actually predicts the future of everything around you, or maybe of what you care about, then you've got a good model. Whether or not you talk about it in a weird way or whatever, you've got a good model. You can rely on that model because it predicts the future. So that's your anchor. That's what I think we need to have in the future, like you're saying. You need an anchor to reality.

The only thing I keep... and maybe this is just a great conversation that's been everything I could have hoped for and more. The hangup I sometimes have is when you talk about the future. It kind of throws me a little bit, because when I hear you say the future, what I want to say is, the future is the next word. As long as we're in agreement that that's the future, not where oil is going or anything like that.

That's the local future, totally. And then there's the global future of anything. I think a lot of people might download this and say, well, I really care about the environment. Great. If you've got that opinion, you've got that background, this is where you're coming from: use your token to point kind of the global prediction network at sea levels rising or ocean temperatures or whatever you want. And so we use that to distribute the bias as much as possible, to get some kind of understanding of the global future. Other people who are into finance will say, I don't care about the environment. I want to just know the price of gold next month or something. And so we're going to have all kinds. And the Satori Foundation will also be providing whatever kind of generalized information it thinks isn't represented. Like the government statistics, for instance. I don't think a lot of individuals who download Satori are going to care about predicting the CPI number. But the CPI number affects everything that is in our economy. And it probably affects the environment, because it changes our behavior and how we do things, because it changes the incentive structure and what we're willing to do for money. So it seems like we should have a lot of nodes that understand the CPI number and how it relates to a lot of different things. So we're going to have the Satori Foundation provide that kind of data and vote on that kind of data. Because it will have some token. It will have some voting power. It's not going to have any arbitrary power. It's just going to be a member of the community of token holders. So we're trying to build a world model.

What's the data stream for CPI? I immediately throw up red flags. It's like, you're going to let some person with a whole different agenda than mine dictate it. I mean, that's why we are where we are with climate: because climate is fake.
We've collected... I live in San Diego. We've collected the sea level information in the harbor of San Diego for several hundred years. And they might be off by a fraction of an inch, but they're not off by a foot. They weren't off by a foot 200 years ago. So that data is the data. That's the data. But that is not the data that we live in. We live in a world that is dominated by a certain narrative about climate. But my point is, I don't want to attack that problem from that layer in the stack, from the, oh, let's get that data. Let's get that sea level data, you know, tide data in a hundred different cities around the world, which they have. My thought is, I want to think ahead, and I want to attack that from the LLM, because the LLM is already... it's like when you said at the beginning, I thought you had an awesome point about what layer you go in at. And I think the layer to go in at is the one above the LLM, sitting right on top of the LLM, not down at the data level, the data stream level. I think you've got to go to the data stream level to make a billion dollars predicting the price of oil and gold, which you should try and do. But for truth, I want to go in right above the LLM. Do you get where I'm coming from?

I do. I do. And I know that that's the layer that really matters. But having my experience with programming and with business intelligence, I understand that any layer like that, you have to aggregate it from the data that's underneath. And so it seems like the appropriate thing to do is to build a world model that understands the data at the bottom layer. And it can say, like, okay, this is the raw data. This is the harbor data. This is whatever. This is what we understand. You know, they've been making these climate predictions forever, right? And they've always been wrong, always. And so we cannot let... that's kind of what we do as humans, is we make predictions of the future. Whether we're a climate scientist, or whether we're a weatherman, or whether we're a CTO, you know, everybody's in some kind of domain, and they're making predictions about the future. A father is making predictions about how his children are growing, right? Everybody's doing it all the time. But what we don't typically do yet, and this is what I want Satori to be, is to be a future oracle that aggregates those predictions and understands them in the context of each other. Because then it will find the truth amongst those predictions, and it will say, wow, you really always make bad predictions. I'm going to kind of ignore that, you know. And so the truth is what we're trying to get at here. And you can't get that from language. You have to go deep to the actual data. Well, I guess, the language of reality. You have to go and speak to God himself, right? You have to see the real data at the bottom and correlate it with each other.

See, this is... just so we can start to wrap it up... but I would suggest that God is the LLM. And maybe not right now, but within a very short period of time, it's aggregating the data. Why try and do that across, like you said, a global number of variables? You know, just plug that in and rely on it, say that can do it. We'll just come in on top of that layer. Which, I understand, you see both sides of it, and I can understand how on certain issues... like, if I was... no, I think I would come in at that layer even for, you know, predicting gold and oil.
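The "ignore the chronically wrong forecaster" idea has a standard form: weight each source by its historical accuracy when aggregating. A sketch with invented numbers and an assumed inverse-error weighting, not quoted from the project:

```python
# Past average one-step errors for each forecaster (lower is better).
track_record = {"climate_model_a": 4.0, "climate_model_b": 0.5, "naive": 1.0}
new_predictions = {"climate_model_a": 9.0, "climate_model_b": 2.1, "naive": 2.4}

# Weight each prediction by the inverse of its historical error,
# so a source that is "always wrong" is mostly ignored.
weights = {name: 1.0 / err for name, err in track_record.items()}
total = sum(weights.values())
consensus = sum(new_predictions[name] * w / total
                for name, w in weights.items())
print(round(consensus, 2))   # ~2.72, dominated by the accurate sources
```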
I think I'd come in at that layer knowing that the LLM, as it advances... I would just take advantage of the fact that it's going to speed way past me, no matter how good I think I could get at the data-level layer. It's going to be way above.

Well, and that's, I mean, I think that's kind of what it is, because the LLM layer is just a really good way to make a model of the correlation of data streams. Yes. Right. Yes. So I think we're saying the same thing. I kind of see it like an iceberg. So the bottom layer is this network of information, just kind of like raw data, raw data streams. And then the intelligence layer is, I think, what you're saying; it can be an LLM layer, a network of LLMs. Yeah. And it's kind of the same way in the brain. You know, our brain is a big organ up here, but its neurons reach into our body, all the way down to our fingers. And so we've got all these individual data streams that flow into the brain. And so kind of that layer of the body or, you know, of the nervous system is kind of what I've already built. I've kind of built the infrastructure to support that as the network scales. That's what the network is. It's just this ability for the LLMs to aggregate the real data. And so once it gets into that layer, they're going to understand it for what it is, just like you're saying, I think, and then translate it into English, translate it into whatever else. But we definitely want it to be ingesting real, raw, world data.

Either at the Satori level or at the level right above Satori, that is, using prompting structures established to reliably extract that data. And then on the reporting back, I'm sure you're going to go towards everything from language to kind of way, way post-language, in terms of, you know, it doesn't have to do language. It just has to send it, one node talking to another node, bro. We don't need to mess it up with a lot of English there. Just boom, give it to me.

Totally. Exactly. Yeah. Yeah. I mean, have you heard of SingularityNET? Are you familiar with kind of the AI-blockchain intersection of those technologies? Any projects that are there?

Is there a particular singularity project that you're referring to?

Yeah. So SingularityNET was headed by, you know, I don't know, a group, but they wanted to capture this namespace, is what I kind of surmised. Way back in like 2016, '17, they were like, blockchain's big. AI is going to be big. It's big already. Let's hype it up. So they're like, let's combine the two. We don't really know how, but we're going to try to do it. So what they did was they said, okay, we're going to make this blockchain platform. Anybody can come on and build any kind of AI, anything that they want. Okay. It can be predicting if there's a dog in this image; it could be whatever. And so, first of all, they're going to have to use a lot of human labor to do that, right? And second of all, all these AIs are going to be talking a different language, right? So if you want them to work together, it's not going to work, right? And so the thing that I think... and this is one of the reasons I called it Satori, is because I'm like, isn't this a better idea than that? It's like, if you're all predicting the future, you're speaking the same language. At the lowest level, you're speaking the same language as the raw data.
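A hedged sketch of the same-language point; the stream names and the pub/sub shape are invented for illustration. Because a node's published prediction has the same shape as a raw data stream, any other node can subscribe to it as just another input and build on the work already done:

```python
class Stream:
    """Raw data and predictions share one shape: a named series of values."""
    def __init__(self, name: str):
        self.name, self.values, self.subscribers = name, [], []

    def publish(self, value: float) -> None:
        self.values.append(value)
        print(f"{self.name} <- {value}")
        for node in self.subscribers:
            node.on_value(self.name, value)

class PredictorNode:
    """Ingests a stream; publishes predictions as a new, subscribable stream."""
    def __init__(self, output: Stream):
        self.output = output

    def on_value(self, stream_name: str, value: float) -> None:
        # Trivial "engine"; the output lands on a stream of the same shape,
        # so any other node can treat it as just another input.
        self.output.publish(value + 1.0)

raw = Stream("sea_level")                       # raw real-world data
derived = Stream("sea_level.predicted")         # node A's predictions
raw.subscribers.append(PredictorNode(derived))  # A ingests the raw stream
derived.subscribers.append(PredictorNode(Stream("second_order")))  # node B
raw.publish(3.0)   # flows through A, then B, all in the same "language"
```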
Yes, no, beautiful. And I can see both sides of that, from a reward standpoint and from a punishment, error-correction standpoint, because you'd want to have both, right? Right. Right. As soon as you see that, oh, my prediction was wrong about that, you get the immediate feedback and change your model. And now you're predicting things you would not have predicted before, because you've updated your model. Or you don't change your model, and you stand out like, you know, Gemini saying, "I don't know the definition of election, I don't know anything." Right. You can't compete. You can't compete at that level. Yeah. Right. And when you take "can't compete" and translate it into what you're talking about, it doesn't come down to a language we're used to. It just comes down to exception, exception, exception, exception. It's falling down in the ranking. Oh, the model is now evolving this way, and this one's being left behind. It's just another line on the graph that's going down. That's right. That's right. And so we don't give it attention. All of this is the management of attention. And we want it to be managed toward the truth and what is real. I think that's what intelligence is meant to do, as far as how we've implemented it in our physical form: manage attention. In the global time space. In the global, yeah. Exactly. Yeah. Yeah.
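A minimal sketch of that feedback loop, under the same caveats as before: when an observed value arrives, every node's standing prediction gets scored immediately, and nodes that keep missing sink in the ranking and lose attention. The scoring rule and names are hypothetical, not Satori's actual mechanism.

```python
# Hypothetical sketch: immediate reward/punishment on each observation,
# producing a ranking that directs attention. Not Satori's scoring code.

scores = {"node_a": 1.0, "node_b": 1.0, "node_c": 1.0}
standing_predictions = {"node_a": 0.11, "node_b": 0.50, "node_c": 0.09}

def on_observation(actual: float) -> list[str]:
    """Score every standing prediction against the value that just arrived."""
    for node, predicted in standing_predictions.items():
        miss = abs(predicted - actual)
        # reward accuracy, punish error: a node that is chronically
        # wrong sees its score decay toward zero over many rounds
        scores[node] *= 1.0 / (1.0 + miss)
    # attention flows to the top of this ranking
    return sorted(scores, key=scores.get, reverse=True)

print(on_observation(0.1))  # node_b falls behind node_a and node_c
```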
Wonderful. Isn't it wild? Yeah. Okay, we ought to wrap it up. Where are you at in this? Where are you going, for folks like me who are interested? How do we stay on top of it? Sure. So this project is in beta; at the time of this recording, it's in beta. So you can go download it and you can run it, but it's beta testing. If there are errors, we're trying to figure out, are there any problems with this, how to make it as close to perfect as we can get before we launch. And the reason for that is, once we launch, it's actually issuing a token, and the token has control over the network. So we don't want any problems after launch. So we're in beta right now, and we'll be in it for probably a few months at least. Who knows, right? It could be six months. But you can go to the website, satorinet.io, and find the Discord, which is where we do a lot of our talking. And we also put major updates out on Twitter, so if you just want major updates, you can go to Twitter and follow the account. How are you going to fund this? How are you going to fund this thing? Satorinet.io. Yeah. How are you going to fund it? Are you going to VC it? We have avoided VC funding. We've had offers, but we've avoided it for regulatory reasons, and also because we want it to be as decentralized as possible. We don't want to be beholden to anyone yet. We thought we could get to kind of a minimal viable product for launch without any funding. So I've been building this on my own time for a long time. I have a couple of friends who are also building it; one's a PhD-level AI expert, and he's focused on the AI engine. And this is our hope; if it doesn't work out this way, then we'll see what we can do. But if we can avoid VC, and I know that's kind of the sexy way to do it right now, we're going to try, at least until we get to the token launch. Then the actual token has a dev fund allocated. As the token comes out and people mine it, a certain percentage goes to the dev fund automatically. Now the token holders, since they own the token, and I don't own the token, the organization doesn't own the token, they own it, they're in control of it, which means they have the power to vote that fund down to zero. So if they do, then we're unfunded, right? But if they don't, then we get to use those funds to pay ourselves for the AI development and for making the system better and better. There will probably be no funds for back pay for everything that's already been built, but that's okay. So once it comes out, we'll fund ourselves through that mechanism, hopefully. If that's not enough, or whatever needs a supplement, we'll figure it out.
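As a back-of-the-envelope illustration of that funding mechanism, here is a sketch in which each mining round routes a dev-fund percentage to the developers, and token holders can vote that percentage down, even to zero. The reward size and rate are made up for illustration and are not Satori's actual tokenomics.

```python
# Hypothetical sketch of the dev-fund emission described above.
# Numbers and names are illustrative only.

BLOCK_REWARD = 100.0   # tokens minted per mining round (made-up figure)
dev_fund_rate = 0.10   # share to the dev fund, controlled by holder vote

def mine_round(miner_balance: float, dev_fund: float) -> tuple[float, float]:
    """Split a round's newly minted tokens between miner and dev fund."""
    dev_cut = BLOCK_REWARD * dev_fund_rate
    return miner_balance + (BLOCK_REWARD - dev_cut), dev_fund + dev_cut

def holders_vote(new_rate: float):
    """Token holders set the rate; voting it to 0.0 de-funds the devs."""
    global dev_fund_rate
    dev_fund_rate = max(0.0, min(new_rate, 1.0))

miner, fund = mine_round(0.0, 0.0)     # miner gets 90, dev fund gets 10
holders_vote(0.0)                      # the community can zero it out
miner, fund = mine_round(miner, fund)  # now 100 to the miner, 0 to the fund
```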
Yeah, I might just edit this out, but it seems to me like you have two businesses. You have the Satori Foundation, which is essentially what you just described, a not-for-profit, pay-it-forward, do-good-in-the-world kind of service, which it should be, for anyone who values truth and transparency. But you've got this other huge business, which is the predictive business of finding unique correlations in markets, and those are immensely valuable. So it seems like you have two businesses. If we built something like that, it would have to be after the network has scaled quite a bit, I think. So you've told folks where they can go, and you've told folks about your vision. Anything else they should know about Satori? You know, I don't know. I think that's it. I mean, you can go to the website now. And if you're like, I just want to know when it gets launched, that's all I want, I don't want to download it, that's okay. You can go to the website, just put in your email on the homepage, and you'll get an email when it's launched. It's kind of a pre-notification list. Or you could just do Twitter, whatever. We want Satori to be a future network, because we think if we have a future oracle, then we can all know the future. And if we all know the future, then we can adapt to the future before it happens. You can put things in order before they exist. And we can avoid disaster. We can evolve with the exponentially evolving environment we find ourselves in, which nowadays is technology. We can evolve with it. And I don't see any other way. We have to make a future oracle. The brain has to make a future oracle: when it's a baby, it has to use the future to understand the world, and if it doesn't, it dies. And I think we're kind of at that level as a human civilization on this planet. If we don't figure this out, we're not going to make it. I totally hear you, and I know there's a lot of people out there in that camp. I'm in a slightly different camp. I have kind of this conspiracy-first mentality, which is like, you know what, I'll handle the future, just give me the reality. And that's what I think you're talking about; that's what resonates with me. There's a very Satori, very Zen quality to this: there is no future, there is only now, there is a bigger now. What Satori is saying is, this is now. This is now. And if you go to ChatGPT, that's not really now. And if you go to Gemini, that's not really now. Satori is now, because you're not against it. Yes. Yes. Yes. There is no future, but there are things that are evolving right now. And so if we had had this before the 2008 financial crisis, then in 2006 we would have been like, oh, this event that is going to happen in 2008 is coming to fruition. It was in process for a long time; we would have seen it in process. That's what it means to be able to predict the future: it means to be able to see the present entirely. And so, yeah, we would be able to avoid disaster if we could do that. Again, that's not it to me. What I just want to know is what's now. Yeah. Because the 2008 disaster didn't happen in 2006. It happened in 2008, and it happened the second that it happened, not a second before and not a second after. But in the aftermath, what we were told was one thing, and it was really something else. We were told it's not gain of function, but then we were told it is gain of function. We're told it's sea level rise, but the data said it's not sea level rise. No matter how you feel about the planet, and of course I love the planet. Satori is everything if it is just now, if the future is now. To me, that is it. Yes, yes. Yeah. It's kind of like, I think it was Alan Watts, you mentioned him earlier, who gave an analogy one time about the present and the future. He said, say you're looking at a fence, and you see this nose come by, and you see the ears, and you see this body, and eventually you see a tail, and this cat kind of walks past, right? And he says, the nose didn't cause the tail to come by. It came by first, but it wasn't the cause of it. And the way we look at the world and talk about the world in our language is all about cause and effect. And his point was, it's all one thing, right? It's all one now, all one moment, all connected. It's not a cause; it's not two different things. So I don't know. I think that is quite telling about the nature of our reality. Beautiful. That's a great place to wrap it right there. All right, great. Thanks, man. Thanks again to Jordan Miller for joining me today on Skeptico. Check out Satori, sign up. I think you may find it interesting. That's going to do it for this one. Hey, be sure to check out the Substack. It's new. I think it's pretty good. I hope you enjoy it. And let me know in general what you think about what's going on here at Skeptico. Until next time, take care. Bye for now.