The Formalization of AI Risk Management and Safety Standards

Shabnam Ozlati, Human Factors Consulting Services, Inc.
Roman Yampolskiy, University of Louisville


Researchers have identified a number of possible risks posed to humanity by anticipated advancements in artificial intelligence (AI), but the extant literature on the topic is largely academic or theoretical in nature. Although much of AI's future development is likely to occur in industry settings, the insights generated by the AI safety research community have yet to be translated into a set of practical guidelines for working developers, project managers, and other industrial stakeholders. No established standards currently guide the safe development of AI technologies, but the risk management approach employed in mature industries such as aerospace and medical device manufacturing offers a promising model that may be adapted to AI-related safety concerns. Within these industries, the safety guidelines and best practices derived from the risk management approach are developed, evaluated, formalized, and disseminated by industry-specific Standards Developing Organizations (SDOs). This paper proposes a project to spur the development and adoption of formal AI risk management practices by demonstrating the approach's viability through the completion of an AI risk assessment process. The results of the proposed activities are intended to lay the initial groundwork for the eventual creation of an AI SDO.