Impact of anthropomorphism of AI agents on privacy concerns

Principal Investigator: Francesco Massara

Year: 2024

Anthropomorphism at the human-computer interface is typically triggered by anthropomorphic features embedded in the design of the hardware (e.g., the human body shape of a digital assistant) or the software (e.g., the sound of a human voice) of the information technology artifact, conveyed through visual or auditory cues (Qiu & Benbasat, 2009). According to mind perception theory, anthropomorphism leads people to perceive nonhuman entities as attentive, conscious, and intentional (Epley et al., 2018; Waytz et al., 2010, 2014). In mind perception, also referred to as humanization or mentalization, individuals make inferences about their own mental states and those of others by imputing unobservable properties such as intentions, desires, goals, beliefs, and secondary emotions (Gray & Wegner, 2012). According to this theory, a perceiver must implicitly determine the extent to which an entity has a mind and then determine that entity's state of mind. Humans can perceive not only the minds of other humans but also those of nonhuman entities such as animals, devices, or software.

Feeling oneself in the presence of another psychological entity heightens the sense of being observed (Puzakova et al., 2013). The belief in a supervising mind can trigger the uncomfortable feeling of being watched, which in turn reduces one's willingness to disclose sensitive information (Puzakova et al., 2013; Waytz et al., 2010).

Based on mind perception theory, we propose that endowing an AI agent with anthropomorphic cues, such as a voice, may make people feel that they are in the presence of another social entity and thus perceive the technology as more intrusive. The goal of this study is therefore to test the hypothesis that the presence of an anthropomorphic AI agent increases consumer privacy concerns because the technology is perceived as more intrusive.