The Deception of Certainty: How Non-Interpretable Machine Learning Outcomes Challenge the Epistemic Authority of Physicians. A Deliberative-Relational Approach.
Florian Funer
Published in: Medicine, Health Care and Philosophy (2022)
Developments in Machine Learning (ML) have attracted attention across a wide range of healthcare fields as a means to improve medical practice and benefit patients. In particular, this is to be achieved by providing more or less automated decision recommendations to the treating physician. However, some of the hopes placed in ML for healthcare appear to be disappointed, at least in part, by a lack of transparency or traceability. Skepticism stems primarily from the fact that the physician, as the person responsible for diagnosis, therapy, and care, has no or insufficient insight into how such recommendations are reached. The following paper aims to make understandable the specificity of the deliberative model of the physician-patient relationship that has been achieved over decades. By outlining the (social-)epistemic and inherently normative relationship between physicians and patients, I want to show how this relationship might be altered by non-traceable ML recommendations. With respect to some healthcare decisions, such changes in deliberative practice may create normatively far-reaching challenges. Therefore, in the future, a differentiation of decision-making situations in healthcare with respect to the necessary depth of insight into the process of outcome generation seems essential.
Keyphrases
- healthcare
- machine learning
- deep learning
- artificial intelligence
- decision making
- big data