This article proposes a framework for examining the ethical and legal concerns raised by the use of artificial intelligence (AI) in post-acute and long-term care (PA-LTC). It argues that established frameworks on health, AI, and the law should be adapted to specific care contexts. For residents in PA-LTC, their social, psychological, and mobility needs should serve as the gauge for examining the benefits and risks of integrating AI into their care. Using those needs as a gauge, 4 areas of particular concern are identified. First, the threat that AI poses to residents' autonomy can undermine their core needs. Second, discrimination and bias in algorithmic decision-making can undermine Medicare coverage for PA-LTC, causing doctors' recommendations to be ignored and denying residents the care they are entitled to. Third, privacy rules governing data use may limit developers' ability to train accurate AI systems, restricting their usefulness in PA-LTC contexts. Fourth, consent must be obtained before AI is used in a resident's care, and there must be discussion of how that care should continue if there are concerns about an ongoing decline in the resident's cognition. Together, these considerations extend existing frameworks and adapt them to the context-specific case of PA-LTC. It is hoped that future research will examine the legal implications of each of these areas in greater depth.
Keyphrases
- artificial intelligence
- big data
- healthcare
- long-term care
- machine learning
- deep learning
- affordable care act
- palliative care
- decision making
- quality improvement
- mental health
- health insurance
- risk assessment