Black box algorithms in mental health apps: an ethical reflection.

Tania Manríquez Roa, Nikola Biller-Andorno
Published in: Bioethics (2023)
Mental health apps bring unprecedented benefits and risks, raising the question of whether the kinds of algorithms they deploy and the functions those algorithms perform are relevant to assessing them as tools for mental health. We focus on mental health apps based on black box algorithms, explore their forms of opacity, discuss the implications of that opacity, and propose how their outcomes can be used in mental healthcare, self-care practices, and research. We argue that there is a relevant distinction between the functions performed by algorithms in mental health apps, and we focus on two: analysis and the generation of advice. When performing analytic functions, such as identifying patterns and making predictions about people's emotions, thoughts, and behaviours, black box algorithms can outperform other algorithms in providing information to identify early signs of relapse, support diagnostic processes, and improve research by generating outcomes that lead to a better understanding of mental health. However, when carrying out the function of providing mental health advice, black box algorithms may deliver unforeseen advice that harms users. We argue that the outcomes of these apps may be trustworthy as a complementary source of information, but we urge caution about black box algorithms that give advice directly to users. To reap the benefits of mental health apps based on black box algorithms and avoid unintended consequences, we critically need to know whether these algorithms are fulfilling the function of providing mental health advice.
Keyphrases
  • mental health
  • machine learning
  • deep learning
  • mental illness
  • healthcare
  • primary care
  • risk assessment
  • decision making