
MoDALAS: addressing assurance for learning-enabled autonomous systems in the face of uncertainty.

Michael Austin Langford, Kenneth H. Chan, Jonathon Emil Fleck, Philip K. McKinley, Betty H. C. Cheng
Published in: Software and Systems Modeling (2023)
Increasingly, safety-critical systems include artificial intelligence and machine learning components (i.e., learning-enabled components (LECs)). However, when behavior is learned in a training environment that fails to fully capture real-world phenomena, the response of an LEC to untrained phenomena is uncertain and therefore cannot be assured as safe. Automated methods are needed for self-assessment and adaptation to decide when learned behavior can be trusted. This work introduces a model-driven approach to manage self-adaptation of a learning-enabled system (LES) to account for run-time contexts in which the learned behavior of LECs cannot be trusted. The resulting framework enables an LES to monitor and evaluate goal models at run time, determine whether its LECs can be expected to meet functional objectives, and adapt the system accordingly. This framework gives stakeholders greater confidence that LECs are used only in contexts comparable to those validated at design time.
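
To make the run-time decision the abstract describes more concrete, the following minimal Python sketch illustrates one way such a monitor-and-adapt loop could work: a goal model is evaluated against the current context, the LEC is used only when the context also resembles what was validated at design time, and otherwise a conservative fallback behavior is selected. All names, functions, and thresholds here (Goal, lec_confidence, control_step, etc.) are illustrative assumptions, not the MoDALAS implementation or API.

    # Hypothetical sketch of a run-time goal-evaluation and adaptation loop.
    # Assumption: goals are predicates over the run-time context, and LEC
    # trust is approximated by a similarity-to-training score.
    from dataclasses import dataclass
    from typing import Callable

    @dataclass
    class Goal:
        name: str
        # Predicate over the current context: True if the goal is satisfied.
        satisfied: Callable[[dict], bool]

    def evaluate_goal_model(goals: list[Goal], context: dict) -> bool:
        """Return True only if every functional goal holds in this context."""
        return all(g.satisfied(context) for g in goals)

    def lec_confidence(context: dict) -> float:
        """Placeholder: estimate how similar the run-time context is to the
        LEC's training distribution (e.g., an out-of-distribution score)."""
        return context.get("similarity_to_training", 0.0)

    def control_step(context: dict, goals: list[Goal],
                     lec_action: Callable[[dict], str],
                     fallback_action: Callable[[dict], str],
                     threshold: float = 0.8) -> str:
        # Trust the learned component only when the goal model is satisfied
        # and the context resembles what was validated at design time.
        if evaluate_goal_model(goals, context) and lec_confidence(context) >= threshold:
            return lec_action(context)
        # Otherwise adapt: switch to conservative, non-learned behavior.
        return fallback_action(context)

    if __name__ == "__main__":
        goals = [Goal("obstacle_visibility", lambda c: c["visibility_m"] > 10.0)]
        ctx = {"visibility_m": 4.0, "similarity_to_training": 0.55}
        action = control_step(ctx, goals,
                              lec_action=lambda c: "lec_steering",
                              fallback_action=lambda c: "safe_stop")
        print(action)  # -> "safe_stop": LEC not trusted in this context

The key design point the sketch captures is that adaptation is gated on two separate checks: satisfaction of the goal model and confidence that the run-time context matches the training context, so that either failure alone is enough to withhold trust from the LEC.
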
Keyphrases
  • artificial intelligence
  • machine learning
  • deep learning