Levels of trust in the context of machine ethics

Herman T. Tavani

pp. 75–90

Are trust relationships involving humans and artificial agents (AAs) possible? This controversial question has become a hotly debated topic in the emerging field of machine ethics. Employing a model of trust advanced by Buechner and Tavani (Ethics and Information Technology 13(1):39–51, 2011), I argue that the "short answer" to this question is yes. However, I also argue that a more complete and nuanced answer will require us to articulate the various levels of trust that are possible in environments comprising both human agents (HAs) and AAs. In defending this view, I show how James Moor's model for distinguishing four levels of ethical agents in the context of machine ethics (Moor, IEEE Intelligent Systems 21(4):18–21, 2006) can help us to develop a framework that differentiates four (loosely corresponding) levels of trust. Via a series of hypothetical scenarios, I illustrate each level of trust involved in HA–AA relationships. Finally, I argue that these levels of trust reflect three key factors or variables: (i) the level of autonomy of the individual AAs involved, (ii) the degree of risk/vulnerability on the part of the HAs who place their trust in the AAs, and (iii) the kind of interactions (direct vs. indirect) that occur between the HAs and AAs in the trust environments.

Publication details

DOI: 10.1007/s13347-014-0165-8

Full citation:

Tavani, H. T. (2015). Levels of trust in the context of machine ethics. Philosophy & Technology 28(1), pp. 75–90.