Drivers are blamed more than their automated cars when both make mistakes.

Edmond Awad, Sydney Levine, Max Kleiman-Weiner, Sohan Dsouza, Joshua B. Tenenbaum, Azim Shariff, Jean-François Bonnefon, Iyad Rahwan
Published in: Nature Human Behaviour (2019)
When an automated car harms someone, who is blamed by those who hear about it? Here we asked human participants to consider hypothetical cases in which a pedestrian was killed by a car operated under shared control of a primary and a secondary driver and to indicate how blame should be allocated. We find that when only one driver makes an error, that driver is blamed more regardless of whether that driver is a machine or a human. However, when both drivers make errors in cases of human-machine shared-control vehicles, the blame attributed to the machine is reduced. This finding portends a public under-reaction to the malfunctioning artificial intelligence components of automated cars and therefore has a direct policy implication: allowing the de facto standards for shared-control vehicles to be established in courts by the jury system could fail to properly regulate the safety of those vehicles; instead, a top-down scheme (through federal laws) may be called for.
Keyphrases
  • deep learning
  • artificial intelligence
  • machine learning