Southern California Robotics (SCR) Symposium 2023
Lab Retreat, Joshua Tree National Park, Spring 2024

The USC Learning and Interactive Robot Autonomy Lab (LiraLab) develops algorithms for robot learning, safe and efficient human-robot interaction, and multi-agent systems. Our mission is to equip robots, or more generally agents powered by artificial intelligence (AI), with the capabilities to intelligently learn from, adapt to, and influence humans and other AI agents. We take a two-step approach to this problem. First, the machine learning techniques we develop enable robots to model the behaviors and goals of other agents by leveraging the different forms of information those agents leak or explicitly provide. Second, these robots interact with other agents to adapt online, leveraging the learned behaviors and goals while ensuring that this adaptation is beneficial and sustainable.

Recent News

Check out our YouTube channel for the latest talks and supplementary videos for our publications.
May 9, 2025: Our paper "Mitigating Suboptimality of Deterministic Policy Gradients in Complex Q-functions" was accepted to the Reinforcement Learning Conference (RLC) 2025.
Apr 29, 2025: Matthew Hong received the Best MS Research Award from the Thomas Lord Department of Computer Science. This award is given to only two students each year.
Apr 10, 2025: Our paper "NaVILA: Legged Robot Vision-Language-Action Model for Navigation" was accepted to the Robotics: Science and Systems (RSS) 2025 conference.
Mar 21, 2025: We are organizing a workshop on human-in-the-loop robot learning at RSS 2025. See here for more details.
Jan 27, 2025: Two of our papers were accepted to the 2025 International Conference on Robotics and Automation (ICRA):
- MILE: Model-based Intervention Learning
- Multi-Agent Inverse Q-Learning from Demonstrations

Recent Talk

Erdem's seminar talk at the University of Washington on "Robot Learning with Minimal Human Feedback"