We study the mathematical science of decision-making in dynamical systems, with a primary focus on safety guarantees and multi-agent decision-making. Core concepts we investigate include nonlinear and hybrid systems theory, reachability analysis, optimal and predictive control, and game theory. The analyses we conduct using these theoretical tools form the foundation for the frameworks we develop to advance the intelligence of autonomous machines.
Machine learning provides methodologies that generalize broadly across diverse platforms and tasks. However, despite recent breakthroughs, current approaches still lack solid theoretical foundations, particularly safety guarantees and principles for multi-agent coordination. Our research aims to bridge this gap by developing data-driven methods with rigorous safety and coordination mechanisms.
Building on foundations in control-theoretic analysis and data-driven frameworks, we aim to advance robot intelligence, with an emphasis on enabling robots to perform complex tasks that exceed any individual robot's capabilities and become possible only through well-coordinated teamwork.
Innovations in aviation autonomy are enabling new forms of operation, such as on-demand air taxi services and autonomous delivery. However, these concepts introduce significant challenges, including high-density air traffic, fluctuating demand, complex urban landscapes, and uncertain weather, each posing critical safety risks. Our research focuses on designing robust architectures for advanced air mobility (AAM), working bottom-up from vehicle-level intelligence and top-down from effective operational decision-making.
Classical control-theoretic analysis provides safety assurance when mathematical assumptions and models hold. We extend these analytical tools to settings where unmodeled effects come into play, such as disturbances, other agents with uncertain intent, and measurement errors. In addition, we study how to effectively balance safety, performance, and learning objectives.
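As one concrete instance of such an analysis, Hamilton-Jacobi reachability certifies safety against worst-case disturbances through a differential game. The formulation below is a standard textbook sketch rather than a summary of any specific result of ours:

$$
V(x) \;=\; \sup_{u(\cdot)}\, \inf_{d(\cdot)}\, \min_{t \in [0, T]} g\big(\xi(t)\big), \qquad \dot{\xi} = f(\xi, u, d), \quad \xi(0) = x,
$$

where $g(x) \ge 0$ encodes the safety constraints and $d$ models the disturbance. Any state with $V(x) > 0$ admits a control strategy that keeps the system safe over the horizon, no matter how the disturbance acts.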
We aim to provide modular safety assurance mechanisms that leverage trajectory data to construct and update safety constraints. These frameworks enable generalizable design methodologies across diverse systems and, by building on control theory and statistical learning, deliver provable robustness against model errors, disturbances, and uncertainty.
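The sketch below illustrates the flavor of such a mechanism: a minimal control-barrier-function safety filter that minimally modifies a desired command. The dynamics, barrier function h, and gain alpha are illustrative assumptions for this sketch, not our published design; in a data-driven instantiation, h would be constructed from trajectory data and updated as new data arrives.

```python
import numpy as np

# Minimal safety-filter sketch (control-barrier-function style).
# Illustrative assumptions, not a published method:
#   - control-affine dynamics  x_dot = f(x) + g(x) u
#   - a barrier function h(x) >= 0 on the safe set
# The filter minimally modifies a desired command u_des so that
#   dh/dt >= -alpha * h(x),  which keeps h nonnegative along trajectories.

def safety_filter(u_des, grad_h, f_x, g_x, h_x, alpha=1.0):
    """Project u_des onto the half-space {u : Lf_h + Lg_h @ u + alpha * h_x >= 0}."""
    Lf_h = grad_h @ f_x            # drift term of dh/dt
    Lg_h = grad_h @ g_x            # control-dependent term of dh/dt
    slack = Lf_h + Lg_h @ u_des + alpha * h_x
    if slack >= 0.0:               # desired command is already safe
        return u_des
    # Closed-form solution of the one-constraint QP:
    #   min ||u - u_des||^2  s.t.  Lf_h + Lg_h @ u + alpha * h_x >= 0
    return u_des - slack * Lg_h / (Lg_h @ Lg_h + 1e-12)

# Toy example: a single integrator avoiding a unit disk at the origin.
x = np.array([1.5, 0.0])
h = x @ x - 1.0                        # h(x) = ||x||^2 - r^2
u_des = np.array([-2.0, 0.0])          # desired command heads into the obstacle
u_safe = safety_filter(u_des, 2 * x, np.zeros(2), np.eye(2), h)
print(u_safe)                          # approach speed is cut before the boundary
```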
As individual robotic capabilities advance, the next major challenge is enabling effective coordination among multiple robots and humans. We develop frameworks that support decentralized, coordinated behaviors across many agents, leveraging approaches such as multi-agent reinforcement learning (MARL) and game theory.
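As a toy illustration of the game-theoretic viewpoint, the sketch below runs iterated best response on a two-player coordination game; the payoff matrix is made up for this example, and our work targets far richer dynamic games and MARL settings:

```python
import numpy as np

# Iterated best response on a two-player coordination game (toy sketch).
# Entry A[i, j] is the payoff when players choose actions i and j;
# here both players share the same payoffs.
A = np.array([[2.0, 0.0],
              [0.0, 1.0]])

a, b = 1, 0                        # start from miscoordinated actions
for _ in range(10):
    a = int(np.argmax(A[:, b]))    # player 1 best-responds to player 2
    b = int(np.argmax(A[a, :]))    # player 2 best-responds to player 1
print(a, b)                        # settles at the (0, 0) equilibrium, payoff 2
```

Decentralized schemes of this flavor, in which each agent optimizes against its current model of the others, are a common building block in MARL and game-theoretic coordination.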
We envision teams of mobile manipulators coordinating to transport and assemble objects in a decentralized fashion to achieve large-scale assembly and construction tasks. This vision encompasses a wide range of challenging problems, including task allocation, long-horizon planning, collision-free coordination, and effective communication.
Future dense air operations will create tightly coupled interactions across multiple decision-making levels. For example, trajectory planning must account for vehicle-specific health conditions (e.g., battery charge level) and dynamic capabilities (e.g., maximum yaw rate), while simultaneously adapting to fluctuating user demand. We aim to develop methods that deliver system-level safety guarantees for next-generation AAM systems, addressing their multi-level objectives, constraints, and complexities.
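As a simplified sketch of how vehicle-level state can enter planning, the snippet below screens candidate trajectories against a yaw-rate limit and a battery budget; the constraint values and the linear energy model are illustrative placeholders, not a real AAM stack:

```python
import numpy as np

# Screen candidate trajectories against vehicle-specific constraints (toy sketch).
# A trajectory is an array of (x, y, heading, time) rows; heading is in radians
# (angle wrapping is ignored for brevity).

def is_feasible(traj, max_yaw_rate, battery_wh, wh_per_meter):
    xy, psi, t = traj[:, :2], traj[:, 2], traj[:, 3]
    yaw_rates = np.abs(np.diff(psi) / np.diff(t))        # rad/s between waypoints
    dist = np.linalg.norm(np.diff(xy, axis=0), axis=1).sum()
    return bool(np.all(yaw_rates <= max_yaw_rate)        # dynamic capability
                and dist * wh_per_meter <= battery_wh)   # health condition

traj = np.array([[  0.0,  0.0, 0.0,  0.0],
                 [ 50.0,  0.0, 0.0,  5.0],
                 [100.0, 20.0, 0.4, 10.0]])
print(is_feasible(traj, max_yaw_rate=0.2, battery_wh=500.0, wh_per_meter=2.0))
```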
Principal Investigator
Assistant Professor, Department of Electrical and Computer Engineering
If you are a current UCLA student and want to email Jason, include "(Current UCLA student)" in the subject line and attach your CV and transcript.
If you are a prospective Ph.D. student and want to email Jason, include "(Prospective Ph.D. student)" in the subject line and attach your CV and transcript.