Why is AI likely to confront us with an overwhelmingly important ethical challenge? Is AI "just a tool", or a potentially powerful and dangerous "agent"? Does controlling AI amount to "slavery", or is it important "humane education"? Will superintelligent AIs consider humans uninteresting and leave us alone, as we do (or, in fact, often fail to do) with non-human animals? What's the worst that could happen? Would "paperclip maximisers" lead to an inanimate future, or could they also create a great deal of suffering, much like (semi-)controlled AIs that lack a sufficient preference against suffering?
Slides: http://www.slideshare.net/Adriano_Man...
---
The Foundational Research Institute (FRI) explores strategies for alleviating the suffering of sentient beings in the near and far future. We publish essays and academic articles, and advise individuals and policymakers. Our current focus is on worst-case scenarios and dystopian futures, such as risks of astronomical future suffering from artificial intelligence (AI). We are researching effective, sustainable, and cooperative strategies to avoid dystopian outcomes. Our scope ranges from foundational questions about consciousness and ethics, such as which entities can suffer and how we should act under uncertainty, to policy implications for animal advocacy, global cooperation, and AI safety.
contact@foundational-research.org