Document Type
Article
Publication Date
12-19-2014
Department
Computer Engineering and Computer Science
Abstract
Many researchers have argued that humanity will create artificial general intelligence (AGI) within the next twenty to one hundred years. It has been suggested that AGI may inflict serious damage to human well-being on a global scale ('catastrophic risk'). After summarizing the arguments for why AGI may pose such a risk, we review the field's proposed responses to AGI risk. We consider societal proposals, proposals for external constraints on AGI behaviors, and proposals for creating AGIs that are safe due to their internal design.
Original Publication Information
Kaj Sotala and Roman V Yampolskiy 2015 Phys. Scr. 90 018001
ThinkIR Citation
Sotala, Kaj and Yampolskiy, Roman V., "Responses to catastrophic AGI risk: A survey" (2014). Faculty and Staff Scholarship. 599.
https://ir.library.louisville.edu/faculty/599
DOI
10.1088/0031-8949/90/1/018001
ORCID
0000-0001-9637-1161