Hello everybody, welcome back to another AI video. This one is a summary of "ChatGPT and the Nature of Truth, Reality and Computation," a recent podcast that just went up on Lex Fridman's channel, with Stephen Wolfram. The thing is, it's over four hours and 14 minutes long, or something crazy like that. Now, if you have the time to listen to the whole thing, I highly recommend you do it. Very interesting, very entertaining. However, if you're looking for a summary, I have an AI-generated summary with some of my own thoughts sprinkled in here and there. What I'm going to do is explain it in one-hour chunks myself, and then after that I'm going to have the AI do a summary, break it down into five-minute components, and read it out in its monotone voice. So here we go, the first hour. Again, this is Stephen Wolfram, "ChatGPT and the Nature of Truth, Reality and Computation," Lex Fridman's podcast number 376.

In the first hour, Stephen Wolfram discusses the integration of ChatGPT and Wolfram Alpha and how they approach generating language and computing expert knowledge, respectively. He delves into the challenge of representing the world in a way that corresponds to how humans think about it, and the importance of symbolic representation and computational reducibility. Wolfram explores the concept of an observer in the computational universe and the limitations of science in capturing natural phenomena in all their complexity. He also discusses the process of turning natural language into computational language and the potential for programming with natural language. Wolfram concludes by examining ChatGPT plugins' ability to detect errors and rewrite code, and the discovery of the laws of semantic grammar underlying language. That is the first hour.
In the second hour of the podcast, Stephen Wolfram explores the relationship between language and computation, with language being defined by social use rather than standard computational documentation. Wolfram believes that the most complicated aspect of language is the poetic aspect that affects another's mind, making it difficult to convert into a computation engine. Wolfram also emphasizes that large language models have limitations, since they cannot perform deep computation. However, language models have the potential to revolutionize education by allowing for personalized learning experiences. Wolfram also raises concerns about AI's role in determining objectives and the dangers of relying on the average of the internet to run society. In addition, Wolfram discusses the concept of intelligence and how it is a type of computation aligned with the experience of the world. The implementation of computation and abstraction is unique to different species and depends on the type of computation being used. So that is the second hour in a nutshell.

In the third hour, Stephen Wolfram discusses various topics including the limitations of human perception compared to other species, the potential risks and uncertainties of artificial intelligence, the nature of truth in computation, and the democratization of access to deep computation through AI systems like ChatGPT. He also talks about the future of programming languages and the changes that automation has brought to learning computer skills, as well as the challenges of formalizing the world and the importance of teaching computational thinking to everyone. Throughout the discussion, Wolfram shares his insights and experiences and offers a unique perspective on the intersection of computation, reality and truth. Summarized another way, in the third hour Stephen Wolfram discusses various topics related to computation, physics and the nature of reality.
He talks about the need for clear and concise descriptions of concepts such as ChatGPT and the importance of a uniform education in computer science. Wolfram also discusses his fascination with the second law of thermodynamics and his efforts to understand how complexity can arise from simple rules through the creation of artificial physics models. He examines the concept of entropy and its relation to computational boundedness, and ultimately concludes that for existence to occur, there must be some form of specialization and coherence in the way we perceive the world. Very interesting.

Finally, the last hour, which is really only about 15 minutes or so: Stephen Wolfram discusses the interplay between computational irreducibility and the computational boundedness of observers, which explains the three fundamental principles of 20th-century physics. He believes that our perception of reality is a simplification rather than an illusion, and that studying computational systems and, well, ruliology can give us a glimpse into the nature of reality. He reflects on his own inventions, which he believes will be central to what is happening in 50 to 100 years, assuming humanity does not exterminate itself. Wolfram is excited to be at the forefront of the development of ChatGPT and language models, which he had assumed were 50 years away, and is glad to witness their blossoming. Okay, with all that said and done, I'm now going to turn it over to the AI, and we're going to break it down into five-minute sections. I will have that set up right now.

Computer scientist and mathematician Stephen Wolfram discusses the integration of ChatGPT with Wolfram Alpha and the Wolfram Language. He explains that ChatGPT's primary focus is on generating language based on a trillion words of text produced by humans, using a shallow computation on a large amount of training data with a neural net.
On the other hand, Wolfram Alpha is focused on taking the formal structure of expert knowledge, such as mathematics and systematic knowledge, and using it to perform arbitrarily deep computations to answer questions that have never been computed before. The goal is to make as much of the world computable as possible, so that questions answerable from expert knowledge can be computed.

Five minutes in this section, computer scientist Stephen Wolfram discusses how humans are able to quickly figure out some things using their neural architecture, while other concepts require the development of formalization, such as logic, mathematics and science. Wolfram explains that to build deep computable knowledge trees, one must start with a formal structure using symbolic programming and symbolic representations of things. He also examines the computational universe, where even extremely simple programs can perform complex tasks, similar to how nature works with simple rules yet still achieves complicated tasks. The challenge is to connect what's computationally possible with what humans typically think about, which is gradually expanding as we learn more and develop new structures and ideas.

Ten minutes in this section, Stephen Wolfram discusses the challenge of representing the world in a way that corresponds to the way we think about things, and how human language is not necessarily a good representation of computation. He talks about symbolic representation and how it has served him well over the past 45 years. Wolfram highlights the importance of computational reducibility and how finding pockets of reducibility is critical to science and invention. The goal of science and other endeavors is to find these places where we can locally jump ahead, and there will always be an infinite number of such places where we can jump ahead to a certain extent.
Fifteen minutes in this section, Stephen Wolfram discusses the idea of reducibility in the universe and how we as observers seek out lumps of reducibility that we can attach ourselves to. This helps us find a level of predictability in the world, which is vital for our existence. However, much of what happens in the universe is computationally irreducible and too complex for us to care about. Wolfram explains how the interaction between underlying computational irreducibility and our nature as observers leads to the laws of physics we have discovered. Additionally, he talks about the critical role the assumption of our persistence in time plays in our thread of experience in the world. Our minds seek out this temporal consistency to create a single thread of experience, which is essential to the way humans typically operate.

Twenty minutes in this section, Stephen Wolfram and Lex Fridman discuss the concept of an observer in the computational universe. Wolfram explains that while consciousness and the idea of a single thread of experience is a specialization of humans, it is not a general feature of anything that could happen computationally in the universe. He explores the idea of a general observer and the importance of taking all the detail of the world and being able to extract a smaller set of elements that will fit in the human mind. They also touch on the issue of observational equivalence and the importance of distinguishing between a thin summary and a crappy approximation of a system.

Twenty-five minutes in this section, Stephen Wolfram discusses how science can fail to capture the full complexity of natural phenomena. Using the example of snowflake growth, Wolfram explains how scientific models may get the growth rate right but miss important details such as the shape and fluffiness of snowflakes.
He also dispels the myth that no two snowflakes are alike, explaining that the rules under which they grow are the same, but timing and environmental conditions lead to different appearances. Wolfram concludes that science faces the challenge of extracting relevant aspects of natural phenomena while preserving their complexity and detail.

Thirty minutes in this section, Stephen Wolfram discusses the concept of modeling and how it deals with reducing the complexity of the world to something that can be easily explained. Wolfram explains that there is no one correct model, since every model captures different aspects of the system, but they all provide some answers to questions. He also explains that in order to build a tower of consequences and understand natural language, we must use computational language, or the Wolfram Language, to formalize what we are talking about. Having a foundation of computational language helps us build step by step to work things out. However, the interaction between natural language and the Wolfram Language is complicated, since people post a variety of information on the Internet, and that creates the training dataset for GPT.

Thirty-five minutes in this section, Stephen Wolfram discusses the process of turning natural language into computational language, where the front end of Wolfram Alpha converts prompts into computational language. Wolfram explains that the success rate of Wolfram Alpha has reached 98-99% for queries such as math calculations and chemistry calculations. Wolfram also explores the idea of programming with natural language and shares an interesting story of a post written in 2010-2011 called "Programming with Natural Language Is Actually Going to Work," which was forwarded by Steve Jobs. Wolfram sees the limitations of learning programming languages and believes it is only a matter of time before natural language prompts become more elaborate and the process becomes smoother.
Forty minutes in this section, Stephen Wolfram discusses the importance of understanding computation and how it is a formal way of thinking about the world. He compares it to mathematics and logic and explains how, if things are successfully formalized in terms of computation, computers can help us determine the consequences. Wolfram explains how a typical workflow for converting natural language to the Wolfram Language involves humans generating vague natural language descriptions of what they want to achieve and large language models producing Wolfram Language code, which is then checked by the humans. If there are errors, humans will debug the code themselves, but the models can help provide hints for the debugging process based on the output of the code.

Forty-five minutes in this section, Stephen Wolfram discusses the ChatGPT plugin and its ability to automatically detect errors and rewrite code to achieve the desired outcome. The plugin uses AI to analyze code and output messages, examples, and documentation to determine what went wrong and how to fix it. Wolfram also talks about the fundamental science behind language and how there is a structure to language beyond grammatical structures. He believes that AI like ChatGPT is able to understand language better because the Wolfram Language was built to be coherent and consistent. He also compares the discovery of logic to the structure of language.

Fifty minutes in this section, Stephen Wolfram and Lex Fridman discuss the evolution of logic and the discovery of an abstraction from natural language that allows for arbitrary word replacement without affecting the logical structure. They talk about George Boole's algebra and how it led to a deeper understanding of formal structures in language. Wolfram believes that ChatGPT has discovered the laws of semantic grammar that underlie language and describes how neural nets in the brain are similar to those in large language models.
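The generate, check, and debug workflow described above can be sketched roughly as a retry loop. This is only an illustration: `ask_model` is a made-up stand-in that returns canned answers, not a real language-model API, and the "generated code" is a toy snippet.

```python
def ask_model(prompt, previous_error=None):
    """Stand-in for a code-generating LLM call (canned answers, not a real API)."""
    if previous_error is None:
        return "result = 10 / n"          # first attempt: crashes when n == 0
    return "result = 10 / n if n else 0"  # revised attempt after seeing the error

def generate_and_check(prompt, inputs, max_attempts=3):
    """Generate code, run it, and feed any error back to the model as a hint."""
    error = None
    for _ in range(max_attempts):
        code = ask_model(prompt, previous_error=error)
        env = dict(inputs)
        try:
            exec(code, {}, env)           # run the generated snippet
            return env["result"]          # success: hand the result to the human
        except Exception as exc:
            error = str(exc)              # failure: the error becomes the hint
    raise RuntimeError(f"gave up after {max_attempts} attempts: {error}")

print(generate_and_check("divide ten by n", {"n": 0}))  # → 0
```

The point of the sketch is the shape of the loop Wolfram describes: the human supplies a vague goal, the model supplies code, and the output or error message flows back in as a debugging hint.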
He also suggests that while AI can perform many different types of computations, humans have decided to focus on the ones that matter most to us.

Fifty-five minutes in this section, Stephen Wolfram and Lex Fridman discuss how humans identify and use specific processes in the physical world that they deem relevant to their needs. Wolfram compares this to the evolution of civilization, where we identify specific things based on their usefulness to human purposes. They also discuss the potential discovery of laws of thought by GPT and how syntax alone is not sufficient to determine meaning in language, as there are specific rules that allow sentences to be semantically correct. However, what constitutes semantically correct remains somewhat circular and is a complicated idea, as seen in the concept of motion.

In this section, Stephen Wolfram discusses the nature of meaning in language and its relationship to computation. He explains that words are defined by social use and do not have standard documentation in computational language. However, words can be defined in computational language to make them precise enough to serve as solid building blocks for computation. Wolfram also believes that human linguistic communication is complicated because it involves one mind producing language that affects another mind, suggesting that there is a poetic aspect to language that is difficult to convert into a computation engine.

One hour and five minutes in this section, Stephen Wolfram discusses the role of natural language in communication, the great invention of the human species that allows the transfer of abstract knowledge from one generation to another. However, natural language is fuzzy and tends to rely on a chain of translations from ancient languages to what we have today.
Wolfram also touches upon the long-debated question of whether natural language and thought are the same, and the relationship between thought, the language of thought, the laws of reasoning, and computation. While large language models can do many things that humans can do, there are plenty of formal things, such as running a program in one's mind, that people cannot do, as humans have outsourced this computation to external tools like computers.

One hour and ten minutes in this section of the video, Stephen Wolfram discusses how different physical infrastructures, such as semiconductors and electronics versus molecular-scale processes like biology, can be representations of computation. When asked whether the laws of language and thought implicit in large language models like GPT can be made explicit, Wolfram explains that once we understand computational reducibility, discovering the computational aspects of language isn't fundamentally different from discovering the computational aspects of physics. He talks about how simple rules can do much more complicated things than we imagine, and that it always surprises him. Wolfram discusses the low-level process of ChatGPT and how it works, saying that it tries to work out what the next word should be; it is surprising to Wolfram that such a simple, low-level training procedure can create something both syntactically and semantically correct.

One hour and 15 minutes in this section, Stephen Wolfram discusses how language models such as ChatGPT are able to produce coherent sentences and essays one word at a time. He explains that the model uses the probabilities of the next word based on the vast number of examples it has seen, and that it is constantly trying to choose the most probable next word. However, he also notes that there is not enough text on the internet to train on specific prompts: the longer the prompt, the less likely it is to have occurred before.
This is where models come into play, and he shares how Galileo was probably one of the first individuals to recognize that mathematical models can be used to predict the way things work. Ultimately, neural nets are a model that successfully reproduces human distinctions and generalizes in the same way humans do.

One hour and 20 minutes in this section, Stephen Wolfram explains the similarities between ChatGPT and the original way that neural nets were imagined to work in 1943. He describes how neural nets always deal with numbers and how, in the case of ChatGPT, each word of the English language is mapped to a number and those numbers are fed into the values of neurons. Wolfram explains that the structure of neural nets is such that values ripple down layer by layer, and that ChatGPT has around 400 layers, computing probabilities that estimate each possible English word that could come next. He describes a temperature parameter that affects the randomness of answers in the output, and how the outer loop of feeding back the previously written words is important. Wolfram shares that one of the unique aspects of ChatGPT is its ability to recognize that an answer is wrong when fed the whole thought, even though it had come up with completely the wrong answer itself.

One hour and 25 minutes in this section, Stephen Wolfram discusses the limitations of large language models, stating that deep computation is not what large language models do. He explains that it is a different kind of thing, and that the outer loop of a large language model is good for anything that one can do off the top of one's head. Wolfram believes that large language models will reveal good symbolic rules that make the neural net less and less necessary, but there will still be some stuff that is fuzzy.
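The next-word sampling and temperature parameter Wolfram describes can be sketched in a few lines. The word scores below are invented for illustration, not taken from any real model; the mechanism shown is the standard temperature-scaled softmax over next-word scores.

```python
import math
import random

def sample_next_word(logits, temperature=1.0, rng=None):
    """Pick the next word from a score table, softened by temperature.

    Higher temperature flattens the distribution (more random choices);
    temperature near 0 approaches always picking the top-scoring word.
    """
    rng = rng or random.Random(0)
    words = list(logits)
    # Softmax with temperature: p_i is proportional to exp(score_i / T)
    scaled = [logits[w] / temperature for w in words]
    m = max(scaled)                        # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Draw one word according to those probabilities
    return rng.choices(words, weights=probs, k=1)[0]

# Invented score table for words that might follow "the cat sat on the"
logits = {"mat": 5.0, "hat": 3.0, "moon": 1.0}
print(sample_next_word(logits, temperature=0.1))  # almost always "mat"
```

The outer loop Wolfram mentions would simply append the chosen word to the prompt and call this again, one word at a time.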
Additionally, Wolfram believes that a small description that one can represent in computational language is always better than building giant computational language models that spool out the whole chain of thought, which is a bizarre and inefficient way to do it.

One hour and 30 minutes in this section of the podcast, Lex Fridman and Stephen Wolfram discuss the potential for language models and computational language to revolutionize personalized education. They describe a scenario in which an AI tutoring system can be used to teach individuals specific topics in a way that is optimized for their understanding. This could mean that specialized knowledge becomes less significant compared to the meta-knowledge of connecting ideas and the big picture, leading to a shift towards a more generalist approach to learning. Wolfram believes that humans will become more useful in fields that require a philosophical approach as technology takes care of the specialized drilling tactics.

One hour and 35 minutes in this section, Wolfram discusses the impact of automation on specialized knowledge and the role of AI in achieving objectives. He explains that AI is best suited for automating mechanical tasks, while humans are needed to define objectives. When asked if language models like GPT can determine objectives, Wolfram questions the basis for such determinations. He raises concerns about the dangers of relying on the average of the Internet and letting language models run society. Instead, he sees an interplay between the individual's search for the new and the collective average, which has high inertia.

One hour and 40 minutes in this section, Wolfram and Fridman discuss the idea of using GPT-3 or a similar language model to define how the world should operate in the future.
While Wolfram suggests that more prescriptive control may be possible when AI systems fully control the world, he also emphasizes the importance of human agency in making choices among the many possibilities that arise in the computational universe. They also ponder the concept of human agency in a predetermined universe and the possibility that humanity is just a step in the larger scheme of things, with the computational universe full of cooler and more complex things.

One hour and 45 minutes in this section, Stephen Wolfram discusses the relationship between AI and natural science. He argues that, although AI operates in a way that is not readily understandable by humans, the same can be said for the natural world. When AIs become so advanced that their operations are beyond human understanding, we will have to develop a new kind of natural science to explain how they work. Wolfram also addresses the existential risks associated with AI, explaining that the simple argument that there will always be a smarter AI and that it will eventually cause terrible things to happen is flawed. He argues that the reality of how these things develop tends to be more complicated than one expects.

One hour and 50 minutes in this section, Stephen Wolfram discusses the concept of intelligence and consciousness as a type of computation that corresponds to a human-like experience of the world. He explains that there may be other intelligences, like the weather, which is a different kind of intelligence that computes things that are hard for humans to do, but it is not well aligned with the way humans think about things. Wolfram also talks about the idea of rulial space, which is the space of all possible rule systems, with different minds being at different points in rulial space, including those of animals such as dogs.
He explains that understanding how animals think and translating it into human thought processes is not trivial, and that he once had a project of making an iPad game that a cat could win against its owner.

One hour and 55 minutes in this section, Stephen Wolfram discusses the possibility of different species having distinct implementations of computation and abstractions that are unique to their biology. While humans have become skilled at abstract reasoning, they may lose at games such as cat chess, which may require faster processing or different conceptual frameworks. Furthermore, Wolfram states that there may be things that were important in the past which we no longer understand, as illustrated by the unidentifiable cave handprints. Ultimately, the smartest system may depend on the type of computation being used and may differ depending on the species implementing it.

In this section, Stephen Wolfram discusses how our perception of reality is limited compared to that of other species like the mantis shrimp, which has 15 color receptors, allowing it to see a much richer view of reality. He suggests that an augmented reality system that sees beyond the range of human vision could eventually become part of our understanding of reality. Moving on to AI, Wolfram acknowledges the potential threats it poses but is optimistic that there will always be unexpected corners and consequences, making it less likely that a superintelligent AI will completely destroy everything. He notes the importance of computational irreducibility and the fact that nature always has unexpected corners.

Two hours and five minutes in this section, computer scientist Stephen Wolfram discusses the potential risks and uncertainties in delegating too much control to AI systems, especially in terms of the unknown consequences and computational irreducibility.
He expresses his concerns about the possibility of these machines wiping out humans, but he remains optimistic that AIs could emerge as an ecosystem. Wolfram also mentions the importance of considering the constraints on these systems, particularly on weapons and security issues. Furthermore, he discusses the impact and relevance of Wolfram Alpha to the nature of truth, and how it tells us information that we hope is true.

Two hours and ten minutes in this section, Stephen Wolfram discusses the concept of truth in computation, which is based on whether or not the output generated by a set of rules accurately reflects the real world. In terms of data curation, the operational definition of truth involves collecting accurate data to create a network of facts that are amenable to computation, such as data that can be measured by sensors or recognized by machine learning systems. However, the question of what is considered good is a much messier concept that may not be amenable to computation, due to differing definitions of ethics and morality. Despite this, certain universal concepts, such as murder being bad, tend to emerge in human society and law.

Two hours and fifteen minutes in this section, Stephen Wolfram discusses the potential of computational contracts to dominate a large part of the world in the future and the responsibility of ensuring factual correctness. He also touches on the challenge of determining when something is true or factual and the risks of relying on computational language to expand into politics. Wolfram acknowledges that ChatGPT writes both fiction and fact and has a view of how the world works, which may or may not be accurate. Despite this, he believes that computational language can accurately represent what happens in the world and capture its features as accurately as possible.

Two hours and twenty minutes in this section, Stephen Wolfram discusses the importance of large language models and how they can be used as a linguistic user interface.
For example, a journalist with five facts could feed them to ChatGPT, and it could generate a report connecting to the collective understanding of language that another person can understand. However, sometimes the natural language produced by the LLM may not actually relate to the world the way the user thinks it should. Despite this, Wolfram sees LLMs as critical interfaces, especially for working with large amounts of data.

Two hours and twenty-five minutes in this section, Stephen Wolfram discusses his experiences with using the ChatGPT plugin kit and how it has made some errors, like producing the wrong melody when asked to play the tune from a particular scene of a movie. He talks about reinforcement learning from human feedback and how it makes ChatGPT well aligned with what humans are interested in. In conclusion, he shares that, similar to building Wolfram Alpha, it is difficult to predict the threshold at which a program surpasses people's expectations, and ChatGPT exceeded everyone's predictions.

Two hours and thirty minutes in this section, access to deep computation is being democratized and simplified through AI systems like ChatGPT, which can allow people who have never interacted with AI systems before to use it. However, in terms of truth and factual output, it's important to understand that ChatGPT is a linguistic interface producing language, which can be truthful or not truthful. Therefore, while people may use fact-checking tools to some extent, the democratization of access to computation is the standout aspect of these language models, and it is essentially automating a lot of the lower-level programming that programmers have been doing for years. As such, it may shift the landscape of computer science departments and programming practices.

Two hours and thirty-five minutes in this section, Stephen Wolfram discusses the potential future of programming languages and how they may evolve into something more accessible to the general public.
Using a linguistic interface mechanism, individuals in various fields of work can access computation, making it easier for them to understand and use. As a result, Wolfram questions what people should now learn in the world of computer science and whether the focus should be more on learning the trade of programming languages or the concept of computation itself. Additionally, Wolfram muses on the possibility of people not even having to look at the generated computational language and instead just trusting the output as it is generated more accurately.

Two hours and forty minutes in this section, computer scientist Stephen Wolfram discusses the changes that automation has brought to learning computer skills and what kind of knowledge is needed to control a computer. According to Wolfram, with automation, many activities that were considered to require human competency are now handled by computers. Therefore, a new set of knowledge is required to program a computer, which is having some notion of what is computationally possible. Wolfram also discusses the role of expository writing departments at universities and how training in expository writing helps in controlling an AI. The discussion transitions to manipulating AIs and discovering deep truths concealed within them.

Two hours and forty-five minutes in this section, Stephen Wolfram discusses the possibility of there being unexpected hacks for large language models (LLMs), and how understanding the science of LLMs could lead to the reverse engineering of the language that controls them. He also talks about the evolution of the computer science department and how it may not be necessary in the future, as there is a greater emphasis on computational thinking for all fields, which he refers to as "computational X." Additionally, Wolfram discusses how ChatGPT is shedding light on the science of the brain and what still needs to be understood.
Two hours and fifty minutes in this section, Wolfram discusses the idea of formalizing the world and finding a formalization of everything in the world, which he likens to logic's aim to formalize everything. Computational thinking is a formal way of talking about the world that allows the building of a tower of capabilities. The challenge is developing a pidgin between natural and computational language, which young people may learn as they interact with ChatGPT. Wolfram shares his experience with young kids speaking the Wolfram Language and the challenge of making computational language a conveniently spoken one. The spoken version of computational language must, however, be easy to dictate, while human language has features that are optimized to keep things within the bounds of our brains.

Two hours and fifty-five minutes in this section of the transcript, Wolfram discusses the challenges of parenthesis matching and how it becomes increasingly difficult for deeper computations. He argues that human language has avoided deep subclauses because our brains are not suited for them. Wolfram then delves into the importance of teaching computational thinking to everyone at varying levels. He believes that learning about formalization, or the computation of the world, should be included in standard education. Wolfram also mentions his project to write a reasonable textbook about what CX is and what one should know about it.

In this section, Wolfram discusses the need for a clear and concise level of description in understanding concepts such as ChatGPT and the importance of a uniform education in CX (computational X). Drawing parallels to mathematics as a field, Wolfram suggests that while experts require a deep understanding of CX, there are others who need only a basic understanding to be able to apply it in their field.
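The parenthesis-matching difficulty Wolfram mentions is trivial for a machine to track, even at depths that defeat a human ear. A minimal sketch of measuring nesting depth (the example expression is invented):

```python
def max_paren_depth(expr):
    """Return the deepest level of nested parentheses in a string."""
    depth = best = 0
    for ch in expr:
        if ch == "(":
            depth += 1
            best = max(best, depth)
        elif ch == ")":
            depth -= 1
            if depth < 0:
                raise ValueError("unmatched closing parenthesis")
    if depth != 0:
        raise ValueError("unmatched opening parenthesis")
    return best

print(max_paren_depth("f(g(x), h(i(j(k))))"))  # prints 4
```

A single counter is all a machine needs, whereas spoken language, as Wolfram notes, rarely nests subclauses more than a couple of levels deep.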
He notes that there may be a centralization of CX education in universities in the future and speculates that a year-long course may be sufficient for people to have a reasonably broad knowledge of CX. Three hours and five minutes in this section, Stephen Wolfram talks about his personal preferences for candy and the importance of physical structure when it comes to food taste. He then moves on to discussing consciousness in relation to computation. Wolfram shares his own exercise of imagining what it's like to be a computer and how similar it is to the concept of human life. He then talks about his personal experience of getting a whole-body MRI scan and how it made him realize that the folds and structure of the brain are the source of his experience of existing. He concludes by noting the similarities between a computer and a human being in terms of having memory, sensory experiences, and the need for communication with others. Three hours and ten minutes in this section, Stephen Wolfram discusses the transcendence of experiences and how it might relate to computers. He believes that an ordinary computer is already capable of such transcendental experiences; however, a large language model may be better aligned with humans in terms of reasoning and thinking. Wolfram also discusses the possibility of bots becoming human-like and how it may affect the job industry. In his personal experience, he builds tools and uses them, and as much as possible, he incorporates computers as part of the process. Three hours and fifteen minutes in this section of the video, Stephen Wolfram discusses the second law of thermodynamics and its principle that things tend to get more random over time. He explores the question of why this happens and why it is irreversible, going into the history of the law and the many attempts to explain it from the first principles of mechanics, after which it has nonetheless remained a mystery. 
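The parenthesis-matching point from the two-hours-and-fifty-five-minutes section can be made concrete: tracking nesting depth is a trivial counter (or stack) computation for a machine, while human working memory struggles past a few levels, which is presumably why natural language avoids deep subclauses. A minimal sketch in Python (the function name and example strings are illustrative, not from the podcast):

```python
def max_nesting_depth(expr: str) -> int:
    """Return the deepest level of parenthesis nesting, or -1 if unbalanced."""
    depth = max_depth = 0
    for ch in expr:
        if ch == "(":
            depth += 1
            max_depth = max(max_depth, depth)
        elif ch == ")":
            depth -= 1
            if depth < 0:
                return -1  # a closing paren with no matching open
    # leftover unclosed parens also mean the expression is unbalanced
    return max_depth if depth == 0 else -1

# A machine handles depth 3 as easily as depth 30; people generally cannot.
print(max_nesting_depth("a(b(c(d)))"))  # → 3
print(max_nesting_depth("(()"))         # → -1
```

The computer never loses track no matter how deep the nesting goes, which is exactly the asymmetry Wolfram is pointing at between computational language and spoken language.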
Three hours and twenty minutes in this section, Stephen Wolfram discusses how he became interested in physics, and in the second law of thermodynamics in particular. He talks about how the first law is well understood, but the second law was always a mystery. When he was twelve years old, he received a collection of physics books, including a volume on statistical physics that claimed this principle of physics was derivable. He became interested in how molecules move in a box and attempted to reproduce a picture he saw in one of the books using a computer. However, he failed to reproduce the picture due to the limited capabilities of the computer. Three hours and twenty-five minutes in this section, Stephen Wolfram discusses his fascination with understanding the creation of order in the universe despite the second law of thermodynamics, which states that orderly things tend to degrade into disorder. He sought to understand how complexity could arise from a set of rules and began creating artificial physics models, such as cellular automata. The irony is that these models do not work well for galaxies and brains, but they are excellent models for many other things. Wolfram also notes that these models are intrinsically irreversible, which helps explain the spontaneous creation of order from random initial conditions. Three hours and thirty minutes in this section, Stephen Wolfram discusses his discovery of cellular automata, and specifically rule 30. Wolfram initially ignored rule 30 and considered it just another rule, but when he printed out a high-resolution picture of it, he discovered that it produces apparently random behavior despite having a very simple initial condition. This phenomenon is similar to the second law of thermodynamics. Wolfram also discusses the second law, which states that the forward direction of time is the one in which orderly things become disordered, while the reverse is never seen in the world. 
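Rule 30 itself is easy to reproduce: each cell's next value depends only on its own value and its two neighbors', via the eight-entry lookup table encoded in the binary digits of the number 30. A short sketch, assuming a fixed-width row with cells beyond the edges treated as zero (the variable names are mine, not from the podcast):

```python
RULE = 30  # binary 00011110: maps each 3-cell neighborhood to a new cell value

def step(row):
    """Apply rule 30 once, treating cells beyond the edges as 0."""
    padded = [0] + row + [0]
    # Each neighborhood (a, b, c) indexes a bit of RULE as the number 4a + 2b + c.
    return [(RULE >> (padded[i - 1] * 4 + padded[i] * 2 + padded[i + 1])) & 1
            for i in range(1, len(padded) - 1)]

# Start from a single black cell and print a few generations.
row = [0] * 15
row[7] = 1
for _ in range(8):
    print("".join("#" if c else "." for c in row))
    row = step(row)
```

Running this reproduces in miniature what Wolfram saw in the high-resolution printout: the left edge shows regular structure while the interior looks random, despite the trivially simple rule and initial condition.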
Three hours and thirty-five minutes in this section, Stephen Wolfram discusses the mystery behind the second law of thermodynamics, which describes why order progresses to disorder as time moves forward, but never the other way around. He likens it to cryptography, where a simple key can produce a complicated random mess. He explains that the second law is a story of computational irreducibility, meaning that what we can easily describe at the beginning requires a lot of computational effort to describe at the end. He also speaks on being a computationally bounded observer, meaning we are not able to do much computation when observing a computationally irreducible system. The second law of thermodynamics is thus an interplay between computational irreducibility and the fact that preparers of initial states, or measurers of what happens, are not capable of doing that much computation. Three hours and forty minutes in this section, Stephen Wolfram discusses the history of the concept of entropy. He explains that Ludwig Boltzmann, a prominent physicist at the time, initially assumed that molecules could be placed anywhere, but simplified the situation by assuming that molecules were discrete. Boltzmann then used combinatorial mathematics to compute the number of configurations of molecules in a closed system and formulated a general definition of entropy based on that. However, it wasn't until the beginning of the 20th century that the existence of discrete molecules was confirmed through Brownian motion. Max Planck struggled to fit radiation curves with his idea of how radiation interacted with matter until Einstein came along and hypothesized that electromagnetic radiation might be discrete, potentially made up of photons, marking the start of quantum mechanics. Three hours and forty-five minutes in this section, Stephen Wolfram discusses the history of physics, specifically the belief that matter, electromagnetic fields, and space were continuous. 
However, as scientific understanding progressed, it became clear that matter and electromagnetic fields were discrete. Wolfram believes that space is also discrete, with dark matter as a possible feature of that discreteness, but the challenge is finding the analog of Brownian motion for space to reveal it. Wolfram also explains that entropy is the number of states of the system consistent with some constraint, and if the exact configuration of molecules in the gas is known, the entropy is zero because there is only one possible state. Three hours and fifty minutes in this section, Wolfram discusses the concept of entropy and how it relates to computational boundedness. He explains how important it is for an observer to simplify the complexity of the universe in order to make definite decisions, and how this process reduces all the detail down to one thing. Wolfram also speculates on what it may be like to be a computationally unbounded observer, and states that such an observer would be one with the universe, without experiencing things the same way as humans. Finally, Wolfram and the host discuss the idea of the Ruliad, the space of all possible computations. Three hours and fifty-five minutes in this section, Stephen Wolfram discusses the idea of existence and how it requires some form of specialization. He explains that if we were spread throughout the entire Ruliad, there would be no coherence to the way that we work, and we would not have a notion of coherent identity. To exist means to be computationally bounded, and to exist in the way we think of ourselves as existing, we need to take a slice of all the complexity, just as we notice only certain things despite all the molecules bouncing around in a room. Wolfram notes that the fact that there are laws governing the big things we observe, without having to talk about individual molecules, is a non-trivial fact. 
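Boltzmann's combinatorial definition of entropy described earlier, and the point that a fully known configuration has zero entropy, can be illustrated with a toy counting example: n distinguishable molecules in a box, where the "constraint" is how many sit in the left half. Entropy is then S = ln W, with W the number of microstates consistent with the constraint (the numbers below are illustrative, and Boltzmann's constant is set to 1):

```python
from math import comb, log

def entropy(n_molecules: int, n_left: int) -> float:
    """Boltzmann entropy S = ln W (with k_B = 1) for the macrostate
    'exactly n_left of n_molecules are in the left half of the box'."""
    w = comb(n_molecules, n_left)  # microstates consistent with the constraint
    return log(w)

# An even split is consistent with the most microstates, so it has the
# highest entropy...
print(entropy(10, 5))  # ln C(10, 5) = ln 252 ≈ 5.53
# ...while pinning every molecule to one side leaves a single state, so S = 0.
print(entropy(10, 0))  # ln 1 = 0.0
```

This is the sense in which knowing the exact configuration drives the entropy to zero: the tighter the constraint, the fewer consistent states there are to count.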
In this section, Stephen Wolfram discusses the interplay between computational irreducibility and the computational boundedness of observers, using this to explain how all three fundamental principles of 20th-century physics (gravity, quantum mechanics, and statistical mechanics) are derivable. He notes that these laws require one more thing: observers characterized by computational boundedness and a belief in persistence in time, which implies precise facts about physics. He explains that, given the unique object that is the Ruliad, or entangled limit of all possible computations, our perception of physical reality is inevitable, and our perception of reality is a simplification rather than an illusion. Four hours and five minutes in this section, Stephen Wolfram discusses the nature of truth, reality, and computation. He argues that, while the existence of the universe transcends the limits of scientific knowledge, there is something larger than us that objectively exists as part of the whole set of all possibilities that make up the universe. He also discusses the idea that our experience is a tiny sample of the universe, and that there is an infinite collection of new things we can discover within it. Despite the limitations of human life and cognition, Wolfram suggests that studying computational systems and Ruliology can give us a glimpse into the nature of reality. Four hours and ten minutes in this section, Stephen Wolfram muses on the idea of cryonics and how humanity's priorities and interests change over time. He reflects on his own inventions, which he believes will be central to what is happening in fifty to one hundred years, assuming humanity does not exterminate itself. While it is good to stay engaged and interested, he acknowledges that it can also be a mixed blessing to be constantly inventing and figuring things out. 
Nonetheless, Wolfram is excited to be at the forefront of the development of chat GPT and large language models, which he had assumed were still fifty years away, and is glad to be able to witness their blossoming.