Framing Effects on Judgments of Social Robots’ (Im)Moral Behaviors

Date

2021

Journal Title

Frontiers in Robotics and AI

Publisher

Frontiers Media S.A.
Abstract

Frames—discursive structures that make dimensions of a situation more or less salient—are understood to influence how people understand novel technologies. As technological agents are increasingly integrated into society, it becomes important to discover how native understandings (i.e., individual frames) of social robots are associated with how they are characterized by media, technology developers, and even the agents themselves (i.e., produced frames). Moreover, these individual and produced frames may influence the ways in which people see social robots as legitimate and trustworthy agents—especially in the face of (im)moral behavior. This three-study investigation begins to address this knowledge gap by 1) identifying individually held frames for explaining an android’s (im)moral behavior, and experimentally testing how produced frames prime judgments about an android’s morally ambiguous behavior in 2) mediated representations and 3) face-to-face exposures. Results indicate that people rely on discernible ground rules to explain social robot behaviors; these frames induced only limited effects on responsibility judgments of that robot’s morally ambiguous behavior. Evidence also suggests that technophobia-induced reactance may move people to reject a produced frame in favor of a divergent individual frame.

Description

© 2021 Banks and Koban. Licensed under a Creative Commons Attribution (CC BY) license.

Keywords

framing theory, human–robot interaction, mental models, moral foundations, moral judgment, reactance, technophobia

Citation

Banks, J., & Koban, K. (2021). Framing Effects on Judgments of Social Robots' (Im)Moral Behaviors. Frontiers in Robotics and AI, 8. https://doi.org/10.3389/frobt.2021.627233
