Responses to catastrophic AGI risk: a survey
Abstract
Many researchers have argued that humanity will create artificial general intelligence (AGI) within the next twenty to one hundred years. It has been suggested that AGI may inflict serious damage to human well-being on a global scale (‘catastrophic risk’). After summarizing the arguments for why AGI may pose such a risk, we review the field's proposed responses. We consider societal proposals, proposals for external constraints on AGI behaviors, and proposals for creating AGIs that are safe due to their internal design.
- Publication:
- Physica Scripta
- Pub Date:
- January 2015
- DOI:
- 10.1088/0031-8949/90/1/018001
- Bibcode:
- 2015PhyS...90a8001S