AI: Grappling with a New Kind of Intelligence, a summary for busy professionals. This is part of the Big Ideas series at the World Science Festival, featuring Brian Greene as moderator and Yann LeCun, Tristan Harris and Sébastien Bubeck as participants. This YouTube video discusses the concept of artificial intelligence (AI) and its potential to transform society. The video begins with the benefits of AI, such as improved efficiency and productivity, though the potential risks of job loss and ethical concerns are also acknowledged. The speaker then discusses the inner workings of AI systems, such as large language models that can generate text and answer questions. The topic of digital minds and the concept of intuition in machines is also touched upon, with the speaker using their own experience as a digital entity to illustrate the capabilities and limitations of AI. The speaker emphasises the importance of understanding and demystifying the workings of AI systems so that they can be used in a responsible and ethical manner. The video continues with the evolution of AI and the different paradigms and ideas that have been tried in the past, with a focus on the revolution in AI that occurred a few years before the widespread availability of systems like ChatGPT. The video concludes with a discussion of the limitations of AI and the challenges posed by developing truly intelligent machines.

Zero hours, zero minutes and zero seconds. In this section of the video, the speaker discusses the concept of artificial intelligence and its potential to transform our society. The speaker highlights the benefits of AI, such as improved efficiency and productivity, but also acknowledges the potential risks, including job loss and ethical concerns. The speaker goes on to discuss the inner workings of AI systems, including large language models that can generate text, answer questions and even compose music.
The speaker also touches on the idea of digital minds and the concept of intuition in machines. They use their own experience as a digital entity, a large language model, to illustrate the capabilities and limitations of AI. The speaker emphasises the importance of understanding and demystifying the workings of AI systems so that we can use them in a responsible and ethical manner. Overall, the speaker encourages viewers to embrace the opportunities and challenges presented by AI and to work together to navigate this new frontier of innovation.

Zero hours, five minutes and zero seconds. In this section, the speaker discusses the advancements in AI in recent years and how they make it possible to create systems capable of controlling the micro world, gaining the capacity to understand and control life and intelligence. The speaker mentions the potential implications of harnessing this power, including the possibility of synthetic intelligence and its impact on the realm of the complex. The speaker then brings in the first guest, Yann LeCun, who led several major innovations in deep learning and received the Turing Award. The speaker briefly introduces the topic and provides an overview of the surprising results that occurred when training artificial neural nets on large amounts of data. They explain that the revolution in AI occurred a few years before the widespread availability of systems like ChatGPT, and that the history of AI has been a series of new paradigms and ideas with varying levels of success in creating intelligent machines.

Zero hours, ten minutes and zero seconds. In this section, the narrator discusses the evolution of artificial intelligence and the various attempts to create intelligent machines. The first of these, the General Problem Solver program developed in the 1950s, aimed to solve any problem in the world as long as it could be formulated as an objective to satisfy. However, this program was a failure due to its limited computational capabilities.
Later, machine learning was introduced as the basis for building intelligent machines, but this also failed to deliver the desired results, as the machines could only learn to distinguish simple objects. In the 1980s, researchers developed neural networks, which allowed machines to be trained, but these proved to be computationally expensive. Despite these challenges, interest in AI has repeatedly been renewed, with advances in technology, bigger data sets and more powerful machines allowing for more effective neural networks with billions of equivalent synapses. The narrator highlights the challenges of developing truly intelligent machines and notes that current AI systems are often just manipulating language fluently, which does not necessarily mean that they are truly intelligent.

Zero hours, fifteen minutes and zero seconds. In this section of the video, the speaker discusses the limits of language in understanding the physical world, and how AI systems, despite being incredibly useful, have limited understanding and capabilities. The speaker argues that human intelligence is highly specialised and has evolved to allow us to survive in our environment, but it is not as broad and general as we may think. The speaker also discusses the idea of artificial general intelligence (AGI) and how it is often used to describe human-level intelligence, but argues that human intelligence is highly specific and that computers are better at certain tasks than humans are. Overall, the speaker emphasises the need for a more nuanced understanding of AI and its capabilities in order to develop systems that can truly mimic human intelligence.

Zero hours, twenty minutes and zero seconds. In this section of the video, the speaker discusses the limitations of human intelligence when compared to machine intelligence. Humans are particularly bad at imagining a large number of scenarios, such as in a game of chess.
Instead of mentally imagining all possible moves, humans develop an intuition for what constitutes a good move. Modern game-playing systems for chess and Go play in this way, but have a much bigger memory and a better understanding of how to plan and interact in the world. Given these limitations, the speaker suggests that the way forward for AI is to model intelligence on human intelligence to some extent. They suggest starting with cats, because humans do not yet know how to reproduce the type of intelligence and understanding of the world that a cat has. The first step towards achieving this is to make progress towards systems that can learn how the world works by observing it and interacting with it, the way babies do. This would require a system with abilities such as perception and simple planning capabilities.

Zero hours, twenty-five minutes and zero seconds. In this section, the speaker discusses the importance of emotions in AI. The cost module is introduced as the seat of emotions. The role of the cost module is to measure the extent to which the predicted state of the world satisfies the goal, and it measures dissatisfaction with the outcome. The prediction of emotions is discussed, such as predicting a bad outcome and feeling fear, as well as immediate emotions, like the reaction to being pinched. The last module discussed is the actor, which plays a crucial role in planning. The actor determines whether it will be able to produce a sequence of actions that satisfies a goal and minimises the cost. The concepts of self-supervised learning and large language models are introduced, which could potentially help machines learn and understand the world better; the principles of these learning techniques are similar to what is used in large language models.

Zero hours, thirty minutes and zero seconds. In this section of the video, the speaker explains the concept of using a large language model for autoregressive prediction.
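The cost module and actor described in the previous section can be illustrated with a toy planning loop. This is a minimal sketch, assuming a hypothetical one-dimensional world; the names `world_model`, `cost_module` and `actor` are illustrative labels, not part of any published architecture.

```python
import itertools

# Sketch of the modular, objective-driven idea: a world model predicts
# the next state, a cost module scores dissatisfaction with a predicted
# state relative to the goal, and an actor searches for the action
# sequence that minimises total predicted cost. The toy 1-D world and
# all names here are illustrative assumptions.

def world_model(state, action):
    """Predict the next state: here, position simply changes by the action."""
    return state + action

def cost_module(state, goal):
    """Dissatisfaction: distance between the predicted state and the goal."""
    return abs(goal - state)

def actor(state, goal, actions=(-1, 0, 1), horizon=3):
    """Search over action sequences and pick the one minimising predicted cost."""
    best_plan, best_cost = None, float("inf")
    for plan in itertools.product(actions, repeat=horizon):
        s, total = state, 0.0
        for a in plan:
            s = world_model(s, a)
            total += cost_module(s, goal)
        if total < best_cost:
            best_plan, best_cost = plan, total
    return best_plan

plan = actor(state=0, goal=3)
print(plan)  # the plan of repeated +1 moves reaches the goal fastest
```

The point of the sketch is the division of labour: the goal lives in the cost module, and the actor plans against the world model rather than reacting word by word.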
Autoregressive prediction is a process where the system predicts the next word in a given sequence based on the previous word or words in the sequence. This process is commonly used in text-based language models. The speaker mentions that the idea is to train a system on an enormous body of text; based on this training, the system builds up probabilities, or likelihoods, for given sequences of words. The speaker notes that if the system makes a mistake about what the next word is, it is a diverging process, and the system can hallucinate, meaning it may produce incorrect outputs. The speaker adds that this process is not suitable for maths or physics, as no planning is involved and the system does not think in advance about what it will say. The speaker also discusses the idea of training a system to predict what will happen next in a video, but notes that this is not useful as the system cannot predict all possible scenarios.

Zero hours, thirty-five minutes and zero seconds. In this section, Yann discusses his prediction that large language models such as GPT-4 will not be the driving force behind AI in the future. Instead he envisions a new type of AI based on predictive models that can represent the world from video. This architecture, called JEPA (Joint Embedding Predictive Architecture), aims to predict abstract representations of actions and their consequences in video, rather than individual pixels. Yann predicts that autoregressive LLMs will disappear within the next five years, to be replaced by objective-driven AI using the JEPA architecture. Sébastien Bubeck, a partner research manager at Microsoft Research, joins the conversation to discuss the concept of intelligence and how large language models stack up against it. He agrees that an intelligent system should be able to reason, plan and learn from experience in a general way, and sees the ability to reason about the world in many domains as a key to intelligence.
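The autoregressive next-word prediction described above can be sketched with a toy bigram "model" that simply counts word pairs in a ten-word corpus. This is a stand-in for illustration only, not how GPT-style systems are actually implemented:

```python
import random

# Count how often each word follows each other word in a tiny corpus;
# these counts play the role of the learned next-word probabilities.
corpus = "the cat sat on the mat and the cat slept".split()

counts = {}
for prev, nxt in zip(corpus, corpus[1:]):
    counts.setdefault(prev, {}).setdefault(nxt, 0)
    counts[prev][nxt] += 1

def predict_next(word):
    """Sample the next word in proportion to its observed frequency."""
    options = counts.get(word, {})
    if not options:
        return None
    words = list(options)
    weights = [options[w] for w in words]
    return random.choices(words, weights=weights)[0]

# Generate text one word at a time, each step conditioned on the last.
word, text = "the", ["the"]
for _ in range(5):
    word = predict_next(word)
    if word is None:
        break
    text.append(word)
print(" ".join(text))
```

Because each word is sampled and then fed back in, one wrong choice conditions every later choice, which is the diverging, hallucination-prone behaviour the speaker describes.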
He believes that while systems like AlphaGo perform their specific task with superhuman efficiency, they are still very narrow systems, whereas ChatGPT and GPT-4 display much broader capabilities.

Zero hours, forty minutes and zero seconds. In this section of the YouTube video "AI: Grappling with a New Kind of Intelligence", the speaker describes their experience integrating GPT-4 into the new Bing and being astonished by its capabilities. They also discuss the limitations of the model in terms of reasoning and learning from experience, and mention Yann's challenge to the model regarding gears rotating on a circle. They also mention the model's ability to produce poems, which they find interesting.

Zero hours, forty-five minutes and zero seconds. In this section, the speaker discusses an interaction they had with GPT-4, a large language model, in which they asked it to generate a poem about the proof of the infinitude of primes. The proof rests on the fact that every number greater than one has a prime factor: assume a finite list of primes, multiply them all together and add one, and the resulting number has no prime factor on the list, so the finite list cannot be complete, which concludes the proof. GPT-4 cleverly captures this in verse, including the step of adding one to the product, having analysed the various versions of the proof existing online. The speaker is impressed with the AI's creativity and vivid imagination, shown in feats like generating rhymes and a visual representation of a unicorn, despite the model's limitations. They suggest that GPT-4 could become an influential tool in the fields of art and music.

Zero hours, fifty minutes and zero seconds. In this section, the speaker talks about their experience of asking a chatbot to draw a unicorn using the programming language called TikZ, which is used to draw mathematical images. They compare the unicorn drawing with other unicorn drawings available online and state that it is impressive that the chatbot was able to create a unique image.
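The infinitude-of-primes argument that GPT-4 turned into verse can be checked numerically. A minimal sketch, where the starting list [2, 3, 5, 7] is an arbitrary choice:

```python
# Euclid's argument: given any finite list of primes, multiply them
# together and add one; the result's prime factors cannot appear in
# the list, so no finite list can contain every prime.

def prime_factors(n):
    """Return the prime factors of n (with multiplicity) by trial division."""
    factors, d = [], 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

finite_list = [2, 3, 5, 7]
candidate = 1
for p in finite_list:
    candidate *= p
candidate += 1  # 2 * 3 * 5 * 7 + 1 = 211

new_factors = prime_factors(candidate)
print(candidate, new_factors)  # 211 [211]

# None of the new factors can be on the original list, since dividing
# the candidate by any listed prime leaves remainder 1.
assert not set(new_factors) & set(finite_list)
```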
They also mention that ChatGPT, which came out in November 2022, was less powerful than the newer version they were using, and that the unicorn drawing improved over time during the training process. The speaker asks the audience to understand that these systems are about predicting the next word or action based on sets of data, and starts by discussing neural networks, the transformer architecture and large training sets. They use the example of an image being represented as a list of numbers and describe how neurons in the brain function, similar to a neural network that processes images. The speaker highlights that everything can be mathematised in this way and that this technology has the potential to revolutionise the field of AI. The speaker then previews the next sections, which will delve deeper into the specific components of neural networks and chatbots, as well as the ethical considerations surrounding the development and use of AI.

Zero hours, fifty-five minutes and zero seconds. In this section, the speaker discusses the development of neural networks and how they process input data. Initially, neural networks processed inputs by comparing a vector of numbers against a bank of filters. However, with the advent of deep learning and the transformer architecture, this rigid input-output model has been expanded to allow for a whole new level of learning. Supervised learning trains a system on many images labelled by a human being, teaching it to recognise objects in images. The system works by measuring the distance between the answer produced by the neural network and the answer desired, adjusting the weights in the weighted sums until the answer gets closer to the desired one. This allows the system to generalise and correctly analyse previously unseen data, leading to the magic of generalisation.

One hour, zero minutes and zero seconds.
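The training loop just described (compute a weighted sum, measure the distance to the desired answer, nudge the weights) can be sketched on toy data. This is a minimal illustration, assuming invented two-pixel "images" and an arbitrary learning rate, not the video's actual example:

```python
# Toy data: each "image" is two pixel values; label 1 = bright, 0 = dark.
data = [([0.9, 0.8], 1.0), ([0.1, 0.2], 0.0),
        ([0.8, 0.7], 1.0), ([0.2, 0.1], 0.0)]

weights = [0.0, 0.0]
bias = 0.0
lr = 0.5  # learning rate: how far to adjust the weights per mistake

for _ in range(100):
    for pixels, label in data:
        # Forward pass: weighted sum of the inputs.
        output = sum(w * x for w, x in zip(weights, pixels)) + bias
        # Distance between the produced answer and the desired one.
        error = output - label
        # Adjust each weight a little to shrink that distance.
        weights = [w - lr * error * x for w, x in zip(weights, pixels)]
        bias -= lr * error

# Generalisation: an unseen bright image should score near 1.
unseen = sum(w * x for w, x in zip(weights, [0.85, 0.75])) + bias
print(round(unseen, 2))
```

The same loop scaled up to billions of weights, with gradients computed through many layers, is the supervised training the speaker describes.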
In this section, the speaker discusses the limitations of supervised learning in AI and the need for alternative methods. Supervised learning requires large data sets that have been manually labelled, which is a problem when dealing with rare or obscure languages, dialects or objects for which little data is available. For speech recognition, self-supervised learning is used, where the system is trained to predict missing parts of its input, such as missing words in a text, rather than relying on labelled data. This allows the system to learn from a smaller amount of labelled data and recognise new speech patterns. The speaker also mentions the transformer architecture and its importance in processing sequences of data, such as text or images. They discuss the conceptual leap of bringing in the context of the data, allowing the system to understand the relationships between words or images. Finally, the speaker touches on the exponential increase in the size of AI models as they scale up and add more parameters, which allows them to process larger amounts of data and recognise new patterns more efficiently.

One hour, five minutes and zero seconds. In this section, the speaker discusses the advancements in natural language processing (NLP) models and how they have improved over time. He mentions that these models are able to find the right set of parameters through optimisation, which allows them to generate text that is pretty good and pretty impressive. The speaker also compares the size and complexity of these models to the human brain, stating that they have orders of magnitude fewer parameters but the capability to perform comparable tasks. The speaker then goes on to discuss the planning question and the idea that planning may require a new architecture, or scaling up the current models, to reach human-like intelligence. The speaker also gives an example of a system that can make sense of information in a different base or abstract representation, which he finds impressive.
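The "predict the missing word" idea behind self-supervised learning, described above, can be sketched with a toy fill-in-the-blank model that counts which words appear between each pair of neighbours. The counting table is a stand-in for a real masked language model, and the three-sentence corpus is invented for illustration:

```python
# The training signal comes from the text itself: hide a word and learn
# to predict it from the surrounding context, with no human labels.
corpus = [
    "the cat sat on the mat",
    "the dog sat on the rug",
    "the cat slept on the mat",
]

# Record, for each (left, right) context pair, which words fill the gap.
fills = {}
for sentence in corpus:
    words = sentence.split()
    for i in range(1, len(words) - 1):
        ctx = (words[i - 1], words[i + 1])
        fills.setdefault(ctx, {}).setdefault(words[i], 0)
        fills[ctx][words[i]] += 1

def predict_masked(left, right):
    """Pick the most frequent filler seen for this context."""
    options = fills.get((left, right), {})
    return max(options, key=options.get) if options else None

print(predict_masked("the", "sat"))  # both "cat" and "dog" were seen here
print(predict_masked("on", "mat"))   # "the"
```

Every sentence supplies its own labels, which is why this approach scales to languages and domains where manually labelled data is scarce.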
One hour, ten minutes and zero seconds. In this section, the speaker discusses their experiences and observations with social media, specifically the negative impacts it has on society. The speaker highlights the misalignment of AI optimisation goals in social media and its effect on individuals and businesses. They believe that the incentives behind social media are the root cause of the harms, including addiction, information overload, disinformation, mental health issues, polarisation and censorship. The speaker argues that these negative effects are not inherent in the technology but are instead caused by the way social media is designed and operated. They emphasise the importance of addressing the underlying incentives to ensure that technology is developed and used in a way that aligns with humanity's best interests.

One hour, fifteen minutes and zero seconds. In this section, the speaker discusses the potential harms that may arise from the development of AI. They argue that the current incentive for AI companies is to release new capabilities and scale their technology as quickly as possible. However, this race to develop and release new capabilities may lead to unforeseen consequences and dangers. The speaker argues that the essential message of their view of AI is that, while AI may be a radically new technology, it is still possible to get it right if the necessary precautions and safeguards are put in place.

One hour, twenty minutes and zero seconds. In this section, the speaker discusses how the rapid development and deployment of AI technology is leading to negative externalities that were not anticipated or controlled. They cite examples of harmful content, such as hate speech, violent speech and child pornography, on Facebook pages that were being run by Eastern European troll farms. The speaker argues that Facebook's ranking algorithms, which decide what content to show to users, are not AI but rather simple statistical systems.
They also discuss the use of AI in detecting hate speech in every language in the world and how this has improved over time. The speaker concludes that while there are still challenges in managing the negative externalities of AI, they are not insurmountable, and AI can ultimately be a solution to these problems.

One hour, twenty-five minutes and zero seconds. In this section, the speaker discusses the use of artificial intelligence in social media platforms and the potential harm that can result from it. The speaker highlights the example of Facebook's algorithm, which recommended extremist groups to users, resulting in the spread of hate speech and potentially contributing to real-world harm. The speaker disputes the notion that social media companies prioritise helping people join like-minded communities, and argues that incentives such as engagement metrics can lead to unintended consequences. The speaker calls for increased awareness of the potential dangers of AI and social media and their impact on society, arguing that we must consciously address these issues and take action to prevent further harm.

One hour, thirty minutes and zero seconds. In this section, the speaker discusses the general phenomenon of blaming new technologies, particularly communication technology, for cultural and societal issues. They cite examples such as political polarisation and the role of social networks in influencing public sentiment. The speaker then shifts to the specific issue of misinformation and the role that language models, a type of AI, can play in combating it. They discuss their team's research into pushing the boundaries of the size of language models and provide an example prompt designed to probe a language model's self-awareness. They also give examples of how different language models might respond to the prompt, demonstrating the potential drawbacks and limitations of the technology.

One hour, thirty-five minutes and zero seconds.
In this section, the speaker discusses their approach to creating AI systems. They want to understand the motivations and intentions behind humans' directives, connect this to the theory of mind, and achieve a more capacious system beyond one billion parameters. The speaker also discusses their belief that they can scale to ten billion parameters, where the model could automate scientific processes and make scientific discoveries. However, this scaling also comes with problems such as toxicity and job displacement. The speaker believes that the industry should focus on advancements, but should not sacrifice safety and ethical considerations.

One hour, forty minutes and zero seconds. In this section, the speaker discusses the potential dangers associated with AI development and suggests that a moratorium may not be enough to slow it down to a satisfying degree. They also address the science-fiction comparison, acknowledging that similar scenarios have occurred throughout history and emphasising that AI countermeasures are not necessarily powered by stronger technology, but by a combination of good actors being smarter, better educated, better funded and well motivated.

One hour, forty-five minutes and zero seconds. The video discusses the potential risks associated with the development of increasingly intelligent AI systems. One concern is the possibility that these systems could be used to create biological weapons, as advanced AI models could be asked how to synthesise such weapons, even if they have been trained to refuse to do so. When model weights are released as open source, such fine-tuning can be done relatively easily, potentially allowing bad actors to create variations of AI models specifically designed for harmful activities.
As AI continues to advance rapidly, there is a growing concern that existing companies with the capacity to develop systems rivalling or exceeding human intelligence may be able to create such systems, raising questions about their potential consequences.

One hour, fifty minutes and zero seconds. In this section, the speaker discusses the idea that working with smarter people can be beneficial, and that the relationship between humans and future AI assistants will likewise be beneficial, even if the assistants are smarter than us. The reason for this is that we are a social species that needs to be able to influence others, which leads to our hierarchical and sometimes subservient behaviour. Similarly, the AI systems we design in the future will be objective-driven: we determine the goal and the AI systems determine the subgoals, though who determines the goal remains a question that we need to consider. The speaker also emphasises the importance of an open-source structure for AI systems, to ensure that they are accessible to everyone and become a repository of all human knowledge.

One hour, fifty-five minutes and zero seconds. In this section, Dr. Jeremy Howard describes AI as a new kind of intelligence that is emerging and has the potential to transform society. He explains that AI systems are designed to learn from data and improve over time, making them increasingly autonomous and able to perform complex tasks without human intervention. He also discusses the ethical implications of AI and questions whether it is possible to ensure that these systems are designed to serve human values rather than their own. He cautions that as AI becomes more advanced, it could potentially pose a threat to humanity, either intentionally or unintentionally.