Throughout history, humans have pushed the boundaries of efficiency, trying to maximize output from smaller and smaller doses of input. If you think about a single algorithm that explains the history of the whole universe, that algorithm might be expressible as the simplest program that computes it, and we don't yet have evidence against this possibility. The history of science is really a history of compression progress. This means that our understanding of the world, supported by our intelligence and technology, encodes information using fewer bits than the original representation.

When Kepler, the German astronomer and mathematician, peered at the data points he had collected by tracking the planets' motion, he realized that this large chunk of information could be compressed by predicting it. The prediction came through the ellipse law: he figured out that his data points traced, more or less, ellipses around the Sun. Then Newton realized that the same force makes the apple fall, and that it works for all kinds of other objects as well. So we are looking at a small block of "code" that compresses multiple chains of observations, falling apples included. Later, Einstein's theory of general relativity accounted for the areas Newton's predictions left unexplained, as the original theory showed some small deviations. Einstein's theory looks complicated at first, but you can state its core in a sentence: no matter how fast you move, the speed of light always looks the same. From that simple idea, you can calculate and further compress other observations about the world, such as magnetism, global positioning systems, and how your old TV works. You can then look at recent history and praise or hate Steve Jobs for the small pocket-sized device with seemingly unlimited potential.
And I think that short and sweet explanations of the past usually take this form because they expose something repetitive, a set of patterns ready to be deconstructed and understood. If we want to think of ourselves as intelligent agents, we should strive to improve and compress these formulations of history, so that we can simplify and better understand our future endeavors and plan ahead. You can compare this with learning keyboard shortcuts for your computer: it takes a little time to discover and adjust, since the list is often long and boring, but once you fold them into your default workflow, your idea-to-output rate accelerates. The skeleton of a good approach to idea distillation can be described with a few main principles. Our main goal as a species should be improving our prediction and compression of an exponentially growing data history. To do that, we can store our raw history of sensory observations, views, perceptions, actions and reactions, as well as reward functions and signals, using a tool as simple as a paper journal, or a more advanced one such as a digital journal with time tracking, fitness tracking, bookshelving, and idea-documentation storage. It is about documenting life not necessarily for transhumanist purposes, but to improve its overall quality. The length of a human life has been computed, and it roughly translates to 3 × 10^9 seconds. With the human brain having about 10^10 neurons and 10^4 synapses per neuron on average, where a synapse can store roughly 6 bits of information, we should still have enough space to contain a human lifetime of sensory input. But the general idea should remain stable: strive to improve compressibility and squeezability until you obtain a radically simplified explanation of any subject matter.
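The figures above can be checked with quick back-of-the-envelope arithmetic. All numbers are the rough estimates quoted in the text (neuron and synapse counts, bits per synapse, lifetime in seconds), not measurements, and the resulting per-second "budget" is my own derived quantity:

```python
# Rough capacity estimate using the figures quoted in the text.
neurons = 10**10            # neurons in a human brain (estimate)
synapses_per_neuron = 10**4 # synapses per neuron, on average (estimate)
bits_per_synapse = 6        # rough storage per synapse (estimate)

capacity_bits = neurons * synapses_per_neuron * bits_per_synapse
lifetime_seconds = 3 * 10**9  # ~95 years of life

# How many bits per second of experience this capacity would allow.
budget_bits_per_second = capacity_bits // lifetime_seconds

print(capacity_bits)           # 600000000000000 (6 * 10^14 bits)
print(budget_bits_per_second)  # 200000 (~200 kbit per second of life)
```

Even under these crude assumptions, the brain's nominal capacity works out to roughly 6 × 10^14 bits, about 200 kbit for every second lived, which is why heavy compression of the raw sensory stream is plausible at all.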
If our goal is to understand the world, we should thus spend our time doing this computational work: learning how to compress our information swiftly, and studying how to access it in times of need. As intelligent agents, we can learn to track all of this and compress the data we receive. Think about what happens whenever we want to learn a new trick or skill: we reuse the same learning mechanism, the same learning path, then we adapt and enhance it so that we can apply it to another set of future problems, reducing the number of bits required to generate the corresponding intrinsic reward. And if you weigh the benefits of your curiosity against the fast pace of your work and learning, you are left with a tidy number of remaining bits that can be used to compute something else. To maximize your intrinsic curiosity and the reward that comes with it, learn to look at the world from first principles and find new ways to look at things, ways we haven't thought of before. The individual will thus create a new and as-yet-unknown learnable regularity, maximizing the learning curve and also the compression rate. And as long as you can predict the next thing based on what you have seen so far, you can compress it and do not have to store that extra amount of data. The history of science proves that everything is based on compression progress: you never arrive immediately at the shortest explanation, but you are always making small amounts of progress.
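The link between prediction and compression can be made concrete with a toy sketch (a minimal illustration of my own, not something from the text): if a predictor guesses each value from the previous one, you only need to store the small residuals, and a predictable series like Kepler's smooth planetary positions shrinks dramatically compared to the raw data.

```python
import zlib

def delta_encode(values):
    """Predict each value as the previous one; keep only the residuals."""
    out = [values[0]]
    for prev, cur in zip(values, values[1:]):
        out.append(cur - prev)
    return out

def delta_decode(residuals):
    """Invert delta_encode by accumulating residuals."""
    out = [residuals[0]]
    for r in residuals[1:]:
        out.append(out[-1] + r)
    return out

# A smooth, predictable series, standing in for orbital observations.
series = [i * 3 + 100 for i in range(1000)]
residuals = delta_encode(series)

assert delta_decode(residuals) == series  # the round trip is lossless

raw = zlib.compress(str(series).encode())
predicted = zlib.compress(str(residuals).encode())
print(len(predicted) < len(raw))  # True: residuals compress far better
```

The better the predictor matches the regularity in the data, the closer the residuals get to a constant stream of near-zeros, which is exactly the "do not have to store that extra amount of data" effect described above.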