Compensating for Sensing Failures via Delegation in Human-AI Hybrid Systems

Andrew Fuchs, Andrea Passarella, Marco Conti
Published in: Sensors (Basel, Switzerland) (2023)
Given the increasing prevalence of intelligent systems capable of autonomous actions or of augmenting human activities, it is important to consider scenarios in which the human, the autonomous system, or both can fail as a result of one of several contributing factors (e.g., perception). Failures of either humans or autonomous agents can lead simply to reduced performance, or to outcomes as severe as injury or death. For our topic, we consider the hybrid human-AI teaming case where a managing agent is tasked with identifying when to perform a delegated assignment and whether the human or the autonomous system should gain control. In this context, the manager estimates its best action based on the likelihood that either agent (human or autonomous) will fail as a result of its sensing capabilities and possible deficiencies. We model how the environmental context can contribute to, or exacerbate, these sensing deficiencies. These contexts provide cases in which the manager must learn to identify the agent whose capabilities suit the current decision. As such, we demonstrate how a reinforcement learning manager can learn the correct context-delegation association and assist the hybrid team of agents in outperforming any agent working in isolation.
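The core idea can be illustrated with a toy sketch. This is not the paper's actual model: the two contexts, the two candidate agents, and the per-context success probabilities below are all invented for illustration, and a simple tabular bandit-style Q-learning update stands in for whatever manager the authors actually train. The manager observes a context, delegates to one agent, receives a success/failure reward, and gradually learns the context-delegation association.

```python
import random

random.seed(0)

CONTEXTS = ["clear", "fog"]        # hypothetical environmental contexts
AGENTS = ["human", "autonomous"]   # delegation choices available to the manager

# Hypothetical sensing reliability: the autonomous agent's perception
# degrades in fog, while the human copes better there.
SUCCESS_PROB = {
    ("clear", "human"): 0.70,
    ("clear", "autonomous"): 0.95,
    ("fog", "human"): 0.80,
    ("fog", "autonomous"): 0.40,
}

def train(episodes=5000, alpha=0.1, eps=0.1):
    """Tabular Q-learning over (context, agent) pairs with eps-greedy exploration."""
    q = {(c, a): 0.0 for c in CONTEXTS for a in AGENTS}
    for _ in range(episodes):
        ctx = random.choice(CONTEXTS)
        if random.random() < eps:
            agent = random.choice(AGENTS)                    # explore
        else:
            agent = max(AGENTS, key=lambda a: q[(ctx, a)])   # exploit
        # Binary reward: did the delegated agent succeed in this context?
        reward = 1.0 if random.random() < SUCCESS_PROB[(ctx, agent)] else 0.0
        q[(ctx, agent)] += alpha * (reward - q[(ctx, agent)])
    return q

q = train()
policy = {c: max(AGENTS, key=lambda a: q[(c, a)]) for c in CONTEXTS}
print(policy)
```

Under these assumed probabilities the learned policy delegates to the autonomous agent in "clear" conditions and to the human in "fog", so the hybrid team's expected success rate exceeds that of either agent acting alone in all contexts.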
Keyphrases
  • decision making
  • risk assessment
  • machine learning
  • deep learning