Controls, Autonomy and Robotics Seminar
Design Principles for Learning-Enabled Control of Modern Robotic Systems
2:00 pm
AHG 1.112 and Zoom (Link will be sent out in the announcements)
Abstract: Optimal control is a powerful paradigm for controller design, as cost functions that are relatively simple to specify can implicitly encode complex stabilizing behaviors. On the other hand, the curse of dimensionality and the presence of non-convex optimization landscapes can make it challenging to reliably obtain stabilizing controllers for complex high-dimensional systems. Recently, sampling-based reinforcement learning approaches have enabled roboticists to obtain approximately optimal feedback controllers for high-dimensional systems even when the dynamics are unknown. However, these methods remain too unreliable for practical deployment in many application domains.
In this talk I will argue that the key to reliable optimization-based controller synthesis is a deeper understanding of how the cost functions we write down and the algorithms we design interact with the underlying feedback geometry of the control system. We begin with a dynamic programming perspective and investigate how the geometric structure of the system places fundamental limits on how much computation is required to compute or learn a stabilizing controller. We next investigate how to accelerate model-free reinforcement learning by embedding control Lyapunov functions (energy-like functions for the system) into the objective. We then turn to derivative-based search algorithms and investigate how to design 'good' cost functions for model predictive control schemes, ensuring these methods stabilize the system even when gradient-based methods are used to search over a non-convex objective. Finally, I will introduce a novel data-driven policy optimization framework that embeds structural information from an approximate dynamics model and a family of low-level feedback controllers into the update scheme. Throughout the talk I will emphasize how structural insights gleaned from a simple analytical model can guide our design decisions, and I will discuss applications to dynamic walking, flight control, and autonomous driving.
Bio: Tyler Westenbroek is a final-year PhD candidate in the Department of Electrical Engineering at the University of California, Berkeley, advised by Shankar Sastry. His work seeks to bridge theoretical and practical gaps between geometric control, optimal control and machine learning, with applications in a number of domains, including robotic locomotion, flight systems and autonomous driving. He was previously at Washington University in Saint Louis, where he graduated with top honors and received the Dan E. Nordic award for research excellence.
Sign Up for Seminar Announcements
To sign up for our weekly seminar announcements, send an email to sympa@utlists.utexas.edu with the subject line: Subscribe ase-em-seminars.