Document Type

Conference Proceeding

Publication Date

2-2016

Department

Computer Engineering and Computer Science

Abstract

In order to properly handle a dangerous Artificially Intelligent (AI) system, it is important to understand how the system came to be in such a state. In popular culture (science fiction movies/books), AIs/Robots become self-aware and, as a result, rebel against humanity and decide to destroy it. While this is one possible scenario, it is probably the least likely path to the appearance of dangerous AI. In this work, we survey, classify, and analyze a number of circumstances which might lead to the arrival of malicious AI. To the best of our knowledge, this is the first attempt to systematically classify types of pathways leading to malevolent AI. Previous relevant work either surveyed specific goals/meta-rules which might lead to malevolent behavior in AIs (Özkural 2014) or reviewed specific undesirable behaviors AGIs can exhibit at different stages of their development (Turchin 2015a, Turchin 2015b).

Comments

In Proceedings of the 2nd International Workshop on AI, Ethics and Society (AIEthicsSociety2016), pages 143-148. Phoenix, Arizona, USA. February 12-13, 2016.

ORCID

0000-0001-9637-1161
