
People have different expectations for their own versus others' use of AI-mediated communication tools.

Zoe A. Purcell, Mengchen Dong, Anne-Marie Nussberger, Nils Köbis, Maurice Jakesch
Published in: British Journal of Psychology (2024)
Artificial intelligence (AI) can enhance human communication, for example, by improving the quality of our writing, voice or appearance. However, AI-mediated communication also has risks: it may increase deception, compromise authenticity or yield widespread mistrust. As a result, both policymakers and technology firms are developing approaches to prevent and reduce potentially unacceptable uses of AI communication technologies. Yet little is known about what people believe is acceptable or how they expect these tools to be used. Drawing on normative psychology theories, we examine people's judgements of the acceptability of open and secret AI use, as well as their expectations of their own and others' use. In two studies with representative samples (Study 1: N = 477; Study 2: N = 765), we find that people are less accepting of secret than open AI use in communication, but only when the two are directly compared. Our results also suggest that people believe others will use AI communication tools more than they themselves would, and that they do not expect others' use to align with what they consider acceptable. While much attention has focused on transparency measures, our results suggest that self-other differences are a central factor for understanding people's attitudes and expectations regarding AI-mediated communication.
Keyphrases
  • artificial intelligence
  • machine learning
  • big data
  • deep learning