Error-Correction for AI Safety

Nadisha Marie Aliman, Utrecht University
Pieter Elands, Nederlandse Organisatie voor toegepast-natuurwetenschappelijk onderzoek (TNO)
Wolfgang Hürst, Utrecht University
Leon Kester, Nederlandse Organisatie voor toegepast-natuurwetenschappelijk onderzoek (TNO)
Kristinn R. Thórisson, Reykjavik University
Peter Werkhoven, Utrecht University
Roman Yampolskiy, University of Louisville
Soenke Ziesche

Abstract

The complex socio-technological debate underlying safety-critical and ethically relevant issues pertaining to AI development and deployment extends across heterogeneous research subfields and involves partly conflicting positions. In this context, it seems expedient to generate a minimalistic joint transdisciplinary basis that disambiguates references to specific subtypes of AI properties and risks, enabling an error-correction in the transmission of ideas. In this paper, we introduce a high-level transdisciplinary system clustering for ethical distinction between two antithetical clusters, Type I and Type II systems, which extends a cybersecurity-oriented AI safety taxonomy with considerations from psychology. Moreover, we review relevant Type I AI risks, reflect upon possible epistemological origins of hypothetical Type II AI from a cognitive sciences perspective, and discuss the related human moral perception. Strikingly, our nuanced transdisciplinary analysis yields the figurative formulation of the so-called AI safety paradox, which identifies AI control and value alignment as conjugate requirements in AI safety. Against this backdrop, we craft versatile multidisciplinary recommendations with ethical dimensions tailored to Type II AI safety. Overall, we suggest proactive and, importantly, corrective rather than prohibitive methods as a common basis for both Type I and Type II AI safety.