Autonomous Detection and Assessment with Moving Sensors

Technical point of contact: Stephen Buerger, Sandia National Laboratories
Period of activity: 2017-2019

Overview of the Project

This project is a collaborative effort with Sandia National Laboratories. It will establish a new method for autonomously allocating sensor resources to identify dynamic targets in evolving, uncertain environments.

The region surrounding a high-value asset constitutes an evolving and uncertain environment, and thus requires intelligent allocation of limited sensing resources to ensure overall security with high confidence. The project aims to develop methods that not only generate run-time control inputs with guarantees on mission and performance specifications encoded in temporal logic, but also determine minimal sensing requirements (e.g., sensor types, sensor motion, duration of sensing) for observability of dynamic environments modeled as partially observable Markov decision processes (POMDPs).
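To make the modeling formalism concrete, the sketch below writes out a bare-bones POMDP in Python together with the Bayesian belief update on which any policy over partial observations relies. The class layout, field names (T, Z, belief_update), and dictionary-based tables are illustrative assumptions of ours, not code from the project.

```python
from dataclasses import dataclass
from typing import Dict, List, Tuple

State, Action, Obs = str, str, str

@dataclass
class POMDP:
    states: List[State]
    actions: List[Action]
    observations: List[Obs]
    # T[(s, a)][s2] = probability of moving to s2 when taking action a in s
    T: Dict[Tuple[State, Action], Dict[State, float]]
    # Z[(s2, a)][o] = probability of observing o after reaching s2 via a
    Z: Dict[Tuple[State, Action], Dict[Obs, float]]

    def belief_update(self, belief: Dict[State, float],
                      a: Action, o: Obs) -> Dict[State, float]:
        """Bayes update of the agent's state estimate: the agent never sees
        the state directly, only the observation o emitted after action a."""
        new_belief = {}
        for s2 in self.states:
            pred = sum(belief.get(s, 0.0) * self.T.get((s, a), {}).get(s2, 0.0)
                       for s in self.states)
            new_belief[s2] = self.Z.get((s2, a), {}).get(o, 0.0) * pred
        norm = sum(new_belief.values())
        return {s: p / norm for s, p in new_belief.items()} if norm > 0 else new_belief
```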

As an example, our group recently studied joint sensor-controller design in which the sensors are only partially defined and the goal is to synthesize the weakest additional sensors such that, in the resulting POMDP, a small-memory policy for the agent satisfies a reachability objective. Since additional sensors increase complexity, one goal is to require as few additional observations as possible; and since policies represent controllers, another goal is to ensure that the resulting policies are not too complex. Concretely, given a partially specified POMDP (one whose observations are not completely specified), the problem asks to synthesize at most n additional observations such that the resulting POMDP admits a policy with memory size at most m that ensures the objective is satisfied. Preliminary results provide an algorithm that encodes the problem as an instance of a satisfiability problem and illustrate the trade-off between the amount of policy memory and the number of additional sensors. These results also demonstrate that the number of sensors can be significantly decreased without making the policies more complex, providing comparable sensing performance with far fewer sensors than other methods in the literature.
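To illustrate the synthesis question itself, the following brute-force sketch enumerates, on a toy two-step corridor, every refinement of a partially specified observation map with at most one added observation and every deterministic finite-state controller with at most m memory nodes, and reports for each memory budget the smallest number of added observations that still admits a controller reaching the goal. The corridor model, the names, and the exhaustive search are our own illustrative assumptions; the preliminary results described above instead encode the question as a satisfiability instance, which is what makes larger POMDPs tractable.

```python
from itertools import product

STATES = ["s0", "s1", "goal", "bad"]
ACTIONS = ["a", "b"]
# Deterministic two-step corridor: the correct action differs in s0 and s1,
# so the agent must tell the two states apart, either with policy memory or
# with an added observation. (A toy stand-in; the real setting is stochastic.)
T = {
    ("s0", "a"): "s1", ("s0", "b"): "bad",
    ("s1", "a"): "bad", ("s1", "b"): "goal",
}
DEFAULT_OBS = {"s0": "o_def", "s1": "o_def"}  # partially specified sensing

def policy_exists(obs_map, m):
    """Exhaustively search deterministic finite-state controllers with m
    memory nodes for one that drives s0 to goal under obs_map."""
    obs_set = sorted(set(obs_map.values()))
    memories = range(m)
    keys = [(mem, o) for mem in memories for o in obs_set]
    for acts in product(ACTIONS, repeat=len(keys)):       # action selection
        for upds in product(memories, repeat=len(keys)):  # memory update
            alpha, eta = dict(zip(keys, acts)), dict(zip(keys, upds))
            state, mem, seen = "s0", 0, set()
            # Everything is deterministic, so simulating until the run
            # repeats a (state, memory) pair or gets absorbed is exact.
            while state not in ("goal", "bad") and (state, mem) not in seen:
                seen.add((state, mem))
                o = obs_map[state]
                state, mem = T[(state, alpha[(mem, o)])], eta[(mem, o)]
            if state == "goal":
                return True
    return False

for m in (1, 2):  # memory budget of the controller
    # Refinements with at most one added observation "o_new".
    for refined in ([], ["s0"], ["s1"]):
        obs_map = {**DEFAULT_OBS, **{s: "o_new" for s in refined}}
        if policy_exists(obs_map, m):
            n = 1 if refined else 0
            print(f"m={m} memory node(s): controller found with n={n} "
                  f"added observation(s), refined states: {refined or 'none'}")
            break
```

On this toy model the search reports that a single-memory controller needs one added observation while a two-memory controller needs none, which is the same memory-versus-sensing trade-off the preliminary results quantify via the satisfiability encoding.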