Perceptions of artificial intelligence system's aptitude to judge morality and competence amidst the rise of Chatbots.

Manuel Oliveira, Justus Brands, Judith Mashudi, Baptist Liefooghe, Ruud Hortensius
Published in: Cognitive Research: Principles and Implications (2024)
This paper examines how humans judge the capabilities of artificial intelligence (AI) to evaluate human attributes, focusing on two key dimensions of human social evaluation: morality and competence. It further investigates how exposure to advanced Large Language Models shapes these perceptions. In three studies (combined N = 200), we tested the hypothesis that people find it less plausible that AI is capable of judging the morality conveyed by a behavior than of judging its competence. Participants estimated the plausibility of AI origin for a set of written impressions of positive and negative behaviors related to morality and competence. Studies 1 and 3 supported our hypothesis that people would be more inclined to attribute AI origin to competence-related impressions than to morality-related ones. In Study 2, we found this effect only for impressions of positive behaviors. Additional exploratory analyses clarified that the differentiation between the AI origin of competence and morality judgments persisted throughout the first half year after the public launch of a popular AI chatbot (i.e., ChatGPT) and could not be explained by participants' general attitudes toward AI or by the actual source of the impressions (i.e., AI or human). These findings suggest an enduring belief that AI is less adept at assessing the morality than the competence of human behavior, even as AI capabilities continued to advance.
Keyphrases
  • artificial intelligence
  • machine learning
  • deep learning