How to feel about emotionalized artificial intelligence? When robot pets, holograms, and chatbots become affective partners.

Eva Weber-Guskar
Published in: Ethics and Information Technology (2021)
Interactions between humans and machines that include artificial intelligence are increasingly common in nearly all areas of life. Meanwhile, AI products are increasingly endowed with emotional characteristics: they are designed and trained to elicit emotions in humans, to recognize human emotions, and sometimes to simulate emotions (emotionalized AI, EAI). The introduction of such systems into our lives is met with some criticism. There is a rather strong intuition that there is something wrong about getting attached to a machine, about having certain emotions towards it, and about getting involved in a kind of affective relationship with it. In this paper, I want to tackle these worries by focusing on the last aspect: in what sense could it be problematic, or even wrong, to establish an emotional relationship with EAI systems? I want to show that the justifications for the widespread intuition concerning these problems are not as strong as they seem at first sight. To do so, I discuss three arguments: the argument from self-deception, the argument from lack of mutuality, and the argument from moral negligence.
Keyphrases
  • artificial intelligence
  • machine learning