The USC Learning and Interactive Robot Autonomy Lab (LiraLab) develops algorithms for robot learning, safe and efficient human-robot interaction, and multi-agent systems. Our mission is to equip robots, and more generally AI-powered agents, with the capabilities to intelligently learn from, adapt to, and influence humans and other AI agents. We take a two-step approach to this problem. First, we develop machine learning techniques that enable robots to model the behaviors and goals of other agents by leveraging the different forms of information those agents leak or explicitly provide. Second, these robots leverage the learned behaviors and goals to adapt online as they interact with others, while ensuring this adaptation remains beneficial and sustainable.
Recent News
Check out our YouTube channel for the latest talks and supplementary videos for our publications.

| Apr 26, 2026: | Our paper "Robometer: Scaling General-Purpose Robotic Reward Models via Trajectory Comparisons" was accepted to the Robotics: Science and Systems (RSS) 2026 conference. |
| Mar 30, 2026: | Our paper "Vibrotactile Preference Learning: Uncertainty-Aware Preference Learning for Personalized Vibration Feedback" was accepted to the 34th ACM Conference on User Modeling, Adaptation and Personalization (UMAP) 2026. |
| Feb 24, 2026: | Our patent application "Reinforcement learning based control of imitative policies for autonomous driving" has been granted as US Patent 12,561,602. |
| Feb 21, 2026: | Our paper "ORIC: Benchmarking Object Recognition under Contextual Incongruity in Large Vision-Language Models" was accepted to the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) 2026. |
| Feb 20, 2026: | Erdem has been awarded the 2026 ONR Young Investigator Program (YIP) Award! |

