CAREER: Provably Correct Shared Control for Human-Embedded Autonomous Systems

Technical point of contact: David Corman, NSF
Period of activity: 2017-2022
NSF grant page
UT announcements: ASE and ICES
Poster from the 2017 PI meeting

Overview of the Project

Establishing provable trust is one of the most pressing bottlenecks in deploying autonomous systems at scale. Embedding a human into the operation of an autonomous system, whether as a user, an information source or a decision aid, amplifies the difficulty. While humans offer cognitive capabilities that complement machine-implementable functionalities, the impact of this synergy is contingent on the system’s ability to infer the intent, preferences and limitations of the human, and to account for the imperfections imposed by the interfaces between the human and the autonomous system.

This project targets a major gap in theory and tools for the design of human-embedded autonomous systems. Its objective is to develop languages, algorithms and demonstrations for the formal specification and automated synthesis of shared control protocols. The project identifies three key needs and addresses them in three thrusts:

  • Specifications and modeling for shared control: What does it mean to be provably correct in human-embedded autonomous systems, and how can we represent correctness in formal specifications?
  • Automated synthesis of shared control protocols: How can we mathematically abstract shared control, and automatically synthesize shared control protocols from formal specifications? (A minimal sketch illustrating this thrust follows the list.)
  • Shared control through human-autonomy interfaces: How can we account for the limitations in expressivity, precision and bandwidth of human-autonomy interfaces, and co-design controllers and interfaces?
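
As a concrete illustration of the second thrust, the sketch below synthesizes a simple shared control protocol against a reach-avoid specification on a toy gridworld: the autonomy accepts the human's command whenever the specification can still be satisfied afterwards, and overrides it otherwise. This is only a minimal sketch under assumed models; the gridworld, the threshold and every name in it are hypothetical illustrations, not the project's actual specifications, models or tools.

 # Hypothetical sketch of shared control against a reach-avoid
 # specification ("avoid UNSAFE until GOAL"); not the project's
 # actual models or synthesis tools.

 # States are cells of a 4x4 grid; actions move one cell.
 W, H = 4, 4
 GOAL = (3, 3)
 UNSAFE = {(1, 1), (2, 1)}
 ACTIONS = {"up": (0, -1), "down": (0, 1), "left": (-1, 0), "right": (1, 0)}

 def step(state, action):
     """Apply an action; moves off the grid leave the state unchanged."""
     x, y = state
     dx, dy = ACTIONS[action]
     nx, ny = x + dx, y + dy
     return (nx, ny) if 0 <= nx < W and 0 <= ny < H else state

 def satisfaction_values(iters=50):
     """Value iteration for satisfying the spec from each state.

     With deterministic dynamics the value is 0/1 (spec satisfiable or
     not); in a stochastic MDP it would be a satisfaction probability.
     """
     v = {(x, y): 0.0 for x in range(W) for y in range(H)}
     v[GOAL] = 1.0
     for _ in range(iters):
         for s in v:
             if s == GOAL or s in UNSAFE:
                 continue  # absorbing for the purposes of the spec
             v[s] = max(v[step(s, a)] for a in ACTIONS)
     return v

 def shared_control(state, human_action, values, threshold=1.0):
     """Accept the human's command if the spec remains satisfiable;
     otherwise override with an action that best preserves it."""
     if values[step(state, human_action)] >= threshold:
         return human_action
     return max(ACTIONS, key=lambda a: values[step(state, a)])

 values = satisfaction_values()
 print(shared_control((1, 0), "down", values))   # overridden: "down" enters UNSAFE
 print(shared_control((0, 0), "right", values))  # accepted: spec still satisfiable

In richer settings, the satisfaction values would come from probabilistic model checking of a stochastic model of the autonomy, the human and the environment, with the threshold set below 1 to trade authority between human and autonomy.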