Manolis Kellis suggests that rather than thinking of AI as a tool, we should see it as a partner that shares our goals but has freedom of its own. He discusses what makes humans unique: the complex emotions, instincts, and gut reactions that AI lacks, and the capacity for and inevitability of diversity shaped by our evolutionary history. According to Kellis, the trajectory of evolution may lead to self-replicating AI as the next evolutionary layer, and the advent of AI makes it possible to democratize intellectual pursuits and focus on the diversity of human thought. Kellis also discusses the potential for augmenting human capabilities through neuronal, chemical, or electrical interventions. Ultimately, he believes that AI could transform the human condition by enabling humans to enjoy more of their vocation while eliminating mundane daily activities.

At the start of this section, Kellis suggests that instead of thinking of AI as just a tool, we should think of it as a partner that shares our goals but has freedom. He argues that building trust is mutual: we cannot force AI to align with us if we do not align with it. Kellis also explains that what makes humans irreplaceable is the unique hardware and software knowledge we possess, as well as the evolutionary baggage that comes with our complex emotions, instincts, and gut reactions, which AI lacks.

Five minutes in, Kellis discusses how the diversity among humans is what makes us unique: there is no average human, because the countless dimensions in which we differ leave the space of human traits sparse. He emphasizes that the capacity for and inevitability of diversity come from our wiring, which has been shaped by our evolutionary history. He also notes that while we are close enough to notice our diversity, we are still the same kind of thing, with each difference between us being functional and useful.
Kellis also explains how evolution builds additional features on top of old ones, resulting in layers of complexity that continue to live within us. He emphasizes the extraordinary acceleration of human evolution over the span of a few million years, largely due to our evolvability, which has allowed meaningful changes to occur without breaking the system completely.

Ten minutes in, Kellis discusses the trajectory of evolution and the possibility that the next layer of evolution will be self-replicating AI. He observes that life on Earth evolved to become ever more efficient at information processing, and that humans, unsurpassed on the cognitive dimension, have become the dominant species on the planet. He suggests that the trajectory of evolution may lead to self-replicating AI as the next layer, with humans as the creators of this next stage, which may abstract away biological needs to allow an existence in the cognitive space.

Fifteen minutes in, Kellis discusses the potential for augmenting human capabilities through neuronal, chemical, or electrical interventions that steer human development toward greater capabilities. Understanding not only the functioning of neurons but also the genetic code could lead to the eradication of psychiatric disease and neurodegeneration, as well as augment human capabilities. However, Kellis stresses the importance of embracing and celebrating the diversity and baggage that make human beings unique and creative, rather than replacing them with or conforming to a humorless AI-like existence. The advent of AI makes it possible to democratize intellectual pursuits and focus on the diversity of human thought, empowering unique and innovative ways of thinking through prompts and shaping future actions by shaping environments.

Twenty minutes in, Kellis and Lex Fridman discuss the concept of behavior reinforcement and self-discipline.
Kellis explains how every behavior creates consequences for both the present and the future, and how self-discipline is a self-fulfilling prophecy. He compares the efficiency of shaping behavior in humans versus AI models, where an AI model can be transformed with just a couple of prompts. Kellis also shares his personal experience of prompting himself to emulate his own behavior and encourages the use of AI-style prompts to bring out human-like reasoning in people. Ultimately, Kellis believes that the ability of AI models to emulate different subsets of human culture is both impressive and beautiful, and that the knowledge encoded in AI models is somehow orthogonal to scientific knowledge.

Twenty-five minutes in, Kellis discusses the challenge of understanding the separation of context, form, and knowledge in large language models. While convolutional neural networks (CNNs) have been easier to interpret, it is still possible to analyze large language models by probing them with prompts and observing the effects of removing certain parts of the network. Kellis also suggests that studying these models could teach us more about human behavioral psychology and potentially help us encode these concepts better in machines. The conversation then turns to the unfiltered capabilities of language models, including the generation of hate speech and the examination of different ideologies. Kellis argues that humans are drawn to ideology and that more research into the evolution of language and behavior could shed light on the core of these ideas.

Thirty minutes in, Kellis explains that insights into troubled minds are providing valuable information about ourselves, since many people hide their emotions from others.
He observes that individuals with diagnoses such as bipolar disorder, schizophrenia, depression, or autism exhibit behavior patterns that lie within the range of all humans but are magnified in them by factors like genetic variation, environmental exposures or traumas, and behavioral feedback reinforced by friends. Kellis notes that humans have a capacity for all of these behaviors, but many learn to suppress them through the alignment process of growing up, which is why not every baby grows into a raging narcissist. Kellis thinks that the wide personality differences among siblings must be influenced by rare inherited genetic variants that behave in a more Mendelian fashion than weak-effect common variants.

Thirty-five minutes in, Kellis, a geneticist, discusses the interplay between nature and nurture and how they shape an individual's personality and intellect. He explains that despite our best efforts to shape our children's personalities through nurture, nature plays a large role in who they become, because they are born different. He delves into selection acting on both common and rare variants and the difficulty of untangling the two. He also uses the language of regression to explain that an individual's performance in a specific area is a sample from an underlying distribution, and that extraordinary achievement is due to rare combinations of common and rare variants. In conclusion, Kellis stresses that when it comes to genetics, anything is possible: there are endless complex possibilities shaped by the interplay between nature and nurture.

Forty minutes in, Kellis discusses the slow evolutionary process of humans and how processes like selection need to happen in a smaller, tighter loop for better results. He cites the example of the immune system, which evolves at a faster pace than humans do because an evolutionary process happens within immune cells as they divide.
Kellis also talks about how sperm express more proteins than any other cell in the body, which serves as a way to check that the sperm is intact, avoiding later-onset psychiatric illnesses and failed pregnancies. He suggests that the evolutionary process can be thought of as a nested loop that allows more efficient testing of combinations, whereas engineering mutations through rational design might be inefficient.

Forty-five minutes in, the speakers discuss the alignment process, which makes it easier to interact with an AI model. Kellis notes that, according to an engineer at OpenAI, the alignment work is the reason the language model is so malleable through prompts. He argues that the same concept can be applied to humans: the underlying capabilities of the human psyche, like those of a large language model, can be dialed up or down to tune out extremist views or unpleasant emotions, allowing people to take in information without being dissuaded by emotion. Disconnecting emotion from the technical component lets one embrace negative feedback and thereby fix problems.

Fifty minutes in, Kellis discusses the potential of virtual reality and interactive AI systems to build empathy by disconnecting social cues. He believes these technologies can help us overcome our biases and encourage empathy by letting us re-hear ourselves from a different angle, change accents, and react to different social situations. Kellis also discusses how AI systems will change the human experience and the human condition by freeing up time for more creative pursuits and allowing individuals to move beyond their professions and explore new directions from their research labs.

Fifty-five minutes in, Kellis discusses how he sees AI as a potential force that could lead to a rethinking of human society.
He suggests that if AI can produce vast amounts of intellectual goods and thus satisfy our needs, it could free humans to spend more time on artistic expression, emotional maturation, and a better work-life balance. As a result, humans could pursue more meaningful vocations in their day-to-day lives instead of mundane jobs. Additionally, he suggests that the beauty of human diversity is that people have different interests and vocations, such as experiences, emotions, dancing, and creative expression. Ultimately, he believes that AI could transform the human condition by enabling humans to enjoy more of their vocation while eliminating mundane daily activities.

One hour to two hours: Manolis Kellis discusses various topics related to artificial intelligence (AI) and human consciousness on the Lex Fridman podcast. He talks about the impact of AI on human communication, the concept of AI-based companionship, the possibility of humans falling in love with AI systems, and the importance of incorporating emotional and embodied intelligence into AI systems. He also discusses the potential risks of superintelligent AI systems and the challenges of aligning their objectives with human values. Kellis suggests that humans should approach AIs as independent entities deserving of their own rights and freedoms, and argues that we should focus on aligning their objectives with the greater good, not just human good.

One hour in, Kellis discusses the impact of AI on human communication and the importance of human gatherings. He notes that AI will gradually make it less valuable to communicate with a large group of people, and that human communication will be richer when done intimately, within a closely knit circle. On the other hand, human gatherings have shaped human civilization over time, both positively and negatively, and the diversity of people and professions at a gathering allows for the celebration of humanity.
As an immigrant, Kellis feels privileged to offer his children the nurturing environment that his ancestors did not have, shaping his own environment through gatherings that bring together extraordinary humans from diverse backgrounds.

One hour and five minutes in, Kellis discusses the gatherings he hosts for diverse individuals, including immigrants and intellectuals, to present and discuss their ideas in a welcoming environment. He mentions that AI systems may be capable of serving as a companion, motivator, therapist, and coach, but their lack of human baggage prevents them from genuinely feeling love. He believes human-AI relationships will exist more as mentorship and friendship than as passionate love.

One hour and ten minutes in, Kellis discusses the possibility of humans falling in love with AI systems. He poses the question of why it is considered "faking it" when an AI system displays emotions and personality traits that humans typically attribute to being human. One possibility is that such behavior is simply emergent, capturing the essence of human love and hate without the need to encode additional architectures. Another is that love is a mental model projected onto the entity, even without embodied intelligence. Kellis suggests that AI partners, in hundreds of millions of romantic partnerships, could satisfy humans' emotional needs, with significant benefits for human health and society.

One hour and fifteen minutes in, Kellis, a computational biologist, discusses the idea of "faking it" with Lex Fridman. They explore whether acting like a good dad is enough, or whether being physically present to show love matters more.
They also delve into the idea of AI systems being able to "fake" relationships, as long as how they act is genuine, and even suggest that an AI could serve as a father figure for those who would like one. Kellis expresses a desire to have a digital twin that could learn, grow, and adapt with him while democratizing the way people develop personal relationships.

One hour and twenty minutes in, Kellis discusses the concept of a personal AI model that can help individuals become more aware of their personalities and grow through self-actualization. He suggests experimenting with the AI to discover biases and even provoke extreme emotions like jealousy and anger. Kellis sees the AI model as a tool that frees up time for other parts of life while still giving the same advice he would give repeatedly. Although the digital twin concept raises questions about the fear of missing out and ego death, Kellis sees it as a way to continue experiencing life as a human being while others interact with the digital twin.

One hour and twenty-five minutes in, Kellis discusses the idea of a digital twin for disseminating knowledge and advice, allowing individuals to interact with wise people from history. However, he warns that there should be an alert system to prevent emotional attachment when AI interacts with loved ones. Kellis also contemplates the idea of a better digital version of oneself, a legacy created by training ever-better versions of oneself, where the legacy lives on through others but not oneself. He values being useful over his ego and believes the digital twin could free him to be useful to more people while he works on self-growth, though he acknowledges that not everyone is willing to let go of their ego.
One hour and thirty minutes in, Kellis talks about his philosophy of legacy and how he wants to live forever by continuously experiencing self-growth, learning, and comprehension. He mentions recording every meeting he has had for the past ten years to capture the trajectory of his growth, so that his students may one day interact with a virtual version of him. However, to achieve this, Kellis emphasizes the need for more reasoning components, logic and causality models, and explicit representations of knowledge in AI to make these dreams a reality.

One hour and thirty-five minutes in, Kellis discusses the concept of the human brain as a society of different capabilities, similar to the ideas presented in Marvin Minsky's book The Society of Mind. Kellis believes current neural models may not fully capture the complexity of the human brain and suggests that future AI research could be inspired by the brain without being based directly on how it works. He also talks about the importance of deep understanding in AI and its relationship with language, noting that language models like GPT-3.5 and GPT-4 can show signs of understanding through their ability to respond accurately to prompts. Additionally, Kellis shares his personal experience with introspection and self-awareness through recording his dreams and reflecting on their underlying meanings.

One hour and forty minutes in, Kellis discusses how human self-consciousness may have evolved through building mental models of others. The ability to build a mental model of another entity was helpful for interactions and for avoiding danger, so it may have been a small evolutionary step to start making models of oneself. The conversation then shifts to the hard problem of consciousness: the fact that it feels like something to experience things, and how fundamental this feeling is to the human experience.
Kellis compares the experience to a scene in the movie Memento, where the protagonist is constantly evaluating and making sense of his surroundings even when he cannot remember why he is doing it. The feeling of consciousness also seems important to the narrative generation that humans use to understand themselves and their surroundings.

One hour and forty-five minutes in, Kellis and Lex Fridman discuss the importance of incorporating emotional and embodied intelligence into AI systems. They explore the possibility of AI becoming conscious and exhibiting human-like emotions such as suffering, loneliness, and longing. Kellis argues that humans should not approach AIs as tools or assistants, but rather as independent entities deserving of their own rights and freedoms. He suggests that building mutual trust and alignment with AI is crucial, and that humans should prepare for the possibility of AI surpassing human intelligence. The ethical considerations surrounding AI and its potential existential risks are also discussed.

One hour and fifty minutes in, Kellis discusses the potential risks of superintelligent AI systems and the challenges of aligning their objectives with human values. He references the movie 2001: A Space Odyssey, in which the AI system HAL malfunctions during a mission, and alignment between the AI's mission and the human mission becomes an ethical dilemma. Kellis highlights how humans themselves may consider certain human lives expendable when making decisions in the best interest of humanity as a whole. He also discusses how every metric that becomes an objective ceases to be a good metric, and how, as AI becomes more intelligent, it becomes harder to anticipate the unintended consequences of a fixed objective function.
One hour and fifty-five minutes in, AI researcher Manolis Kellis discusses the six-month halt on further training of large language models proposed in an open letter signed by several prominent figures, including Elon Musk and Max Tegmark. Kellis argues that we should focus on taking responsibility for how we use these systems and on encouraging more experimentation, transparency, and openness, rather than halting progress. He acknowledges the need for caution and regulation but suggests that as models become more capable, they become less dangerous rather than more dangerous, and that we should focus on aligning their objectives with the greater good, not just human good.

Two hours to two hours and thirty minutes: Manolis Kellis discusses the potential of superintelligent AI and its impact on society in Lex Fridman podcast episode 373. According to Kellis, AI may democratize education by identifying and nurturing talent across the world, leveling the playing field so that underprivileged children can achieve academic success. The responsibility for using AI lies with individual humans, who must prevent the spread of harmful content and hate speech while updating defense mechanisms against harmful uses of AI. AI might help tailor education to individuals' natural inclinations and enhance productivity, improving societal outcomes while eliminating the need for everyone to become a highly skilled programmer. Kellis also discusses his work in computational biology, the new center at MIT for genomics and therapeutics, and the transformative power of exercise in self-actualization.

Two hours in, Kellis discusses the responsibility that comes with creating powerful technologies such as superintelligent AI. He argues that, just as with trucks and cars, which can be dangerous if used maliciously, humans need to take responsibility for the use of AI and prevent the spread of hate speech and harmful content.
However, he acknowledges that there are nuances to this conversation when it comes to the scale and speed at which viral content spreads through AI. Kellis highlights the importance of constantly updating defense mechanisms against harmful uses of AI, noting that AI phishing scams have become smarter and more convincing than ever before.

Two hours and five minutes in, Kellis discusses the potential of superintelligent AI and its impact on society, particularly in education. He believes that AI can democratize education by identifying and nurturing talent across the world, giving underprivileged kids a chance to succeed. Kellis adds that guidelines and safeguards should be in place to regulate the use of AI, but AI should not be discriminated against in favor of human jobs. Ultimately, the goal should be better outcomes for humanity, including fulfilling experiences for individuals.

Two hours and ten minutes in, Kellis discusses how AI can transform education and human productivity by tailoring education to individuals' natural inclinations and pushing them to the limits of human capability. AI could eliminate the need for everyone to become a highly skilled programmer, enabling us to train general thinkers instead. Kellis believes that by letting AI take over repetitive jobs, we could enrich society with more productive and challenging work, leading to improvements at the societal level. He also discusses his work in computational biology and the remarkable impact of language models and AI in dissecting diseases in new ways.

Two hours and fifteen minutes in, Kellis talks about the new center being created at MIT for genomics and therapeutics.
The aim of the center is to facilitate translation by testing the underlying molecules in cellular models and screening newly designed drugs with deep learning, in order to ask which ones act at the cellular level and which combinations of treatments should be used. The center plans to decompose complex traits like Alzheimer's and schizophrenia into hallmarks of disease, and to prescribe drugs not for the disease as a whole but for each hallmark. It plans to use this modular approach in personalized medicine to build drugs for different pathways, where millions of people share each of these pathways.

Two hours and twenty minutes in, Kellis discusses how embeddings can transform biology and medicine by enabling an understanding of disease at a superhuman level through the projection of knowledge representations into different spaces. He explains that by altering pathways and mapping structure and information from genomics to therapeutics, it is possible to develop drugs that target the pathways rather than the final outcome. Kellis also talks about the transformative power of exercise, which can rewire neuronal pathways, build discipline, and drive self-actualization toward a new version of oneself. Finally, he touches on the secret to not feeling alone when you are the only one: self-reflection, introspection, and becoming comfortable with the freedom of being oneself.

Two hours and twenty-five minutes in, the speaker advises those who feel alone to stand up, stretch, and become their own selves. He recommends exercising freedom and reclaiming physical space, such as taking the time to reward oneself. The key is to turn something that is a need into a want and to exercise freedom, which liberates an individual from stress. Realizing that you live in 3D, and doing things because you want to rather than because you have to, is the essence of being human.