Anders discusses the Intelligence Explosion and various neighboring topics, including:
- Timescales to Strong AI: Brain Simulation
- Neuromorphic Artificial Intelligence
- Safety concerns of Bio-Inspired AI
- Difficulties in predicting the path to Strong AI
- Statistical Learning
- The Possibility of an Intelligence Explosion
- AI: Rapid Self-Improvement
- An intelligence expansion
- Reducing the likelihood of an uncontrolled intelligence explosion
- Theories of self-improving systems
- Endogenous growth models
- Why hasn't it happened yet? Why do performance increases seem to level off?
- Will society be ready for an intelligence explosion?
- The wrong analogies
- Figuring out better ways to think about the future
- The Manhattan Project
- The relatively small amount of risk analysis done before potentially risky self-improving AI projects
- Bounding Risk
- The Precautionary Principle
Many thanks for watching!
Consider supporting me by:
a) Subscribing to my YouTube channel: http://youtube.com/subscription_cente...
b) Donating via Patreon: https://www.patreon.com/scifuture and/or
c) Sharing the media I create
Kind regards,
Adam Ford
- Science, Technology & the Future: http://scifuture.org