[Photo: Southern California Robotics (SCR) Symposium 2023]
[Photo: Lab Retreat, Joshua Tree National Park, Spring 2024]

The USC Learning and Interactive Robot Autonomy Lab (LiraLab) develops algorithms for robot learning, safe and efficient human-robot interaction, and multi-agent systems. Our mission is to equip robots, and more generally agents powered with artificial intelligence (AI), with the capabilities to intelligently learn from, adapt to, and influence humans and other AI agents. We take a two-step approach to this problem. First, we develop machine learning techniques that enable robots to model the behaviors and goals of other agents by leveraging the different forms of information those agents leak or explicitly provide. Second, the robots use these learned behaviors and goals to adapt online as they interact with others, while ensuring that this adaptation is beneficial and sustainable.

Recent News

Check out our YouTube channel for the latest talks and supplementary videos accompanying our publications.
Sep 25, 2024: Our paper "DynaMITE-RL: A Dynamic Model for Improved Temporal Meta-Reinforcement Learning" was accepted to the 38th Conference on Neural Information Processing Systems (NeurIPS).
Sep 20, 2024: Our paper "Accurate and Data-Efficient Toxicity Prediction when Annotators Disagree" was accepted to the Conference on Empirical Methods in Natural Language Processing (EMNLP).
Sep 4, 2024: Two of our papers were accepted to the Conference on Robot Learning (CoRL) 2024:
- Trajectory Improvement and Reward Learning from Comparative Language Feedback
- EXTRACT: Efficient Policy Learning by Extracting Transferable Robot Skills from Offline Data
Jun 30, 2024: Our paper "ViSaRL: Visual Reinforcement Learning Guided by Human Saliency" was accepted to the 2024 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS).
May 2, 2024: Two of our papers were accepted to the 2024 International Conference on Machine Learning (ICML):
- RL-VLM-F: Reinforcement Learning from Vision Language Foundation Model Feedback
- Coprocessor Actor Critic: A Model-Based Reinforcement Learning Approach For Adaptive Brain Stimulation
See All

Recent Talk

Erdem's seminar talk at the University of Washington on "Robot Learning with Minimal Human Feedback"