The USC Learning and Interactive Robot Autonomy Lab (LiraLab) develops algorithms for robot learning, safe and efficient human-robot interaction, and multi-agent systems. Our mission is to equip robots, and more generally agents powered by artificial intelligence (AI), with the capabilities to intelligently learn from, adapt to, and influence humans and other AI agents. We take a two-step approach to this problem. First, we develop machine learning techniques that enable robots to model the behaviors and goals of other agents by leveraging the different forms of information those agents leak or explicitly provide. Second, these robots interact with other agents and adapt online by leveraging the learned behaviors and goals, while ensuring that this adaptation is beneficial and sustainable.
Recent News

Check out our YouTube channel for the latest talks and supplementary videos for our publications.
Dec 5, 2023: Our paper titled "Open Problems and Fundamental Limitations of Reinforcement Learning from Human Feedback" got accepted to Transactions on Machine Learning Research (TMLR).

Sep 25, 2023: Our paper titled "Active Preference-Based Gaussian Process Regression for Reward Learning and Optimization" got accepted to the International Journal of Robotics Research (IJRR).

Sep 21, 2023: Our paper titled "RoboCLIP: One Demonstration is Enough to Learn Robot Policies" got accepted to the 37th Conference on Neural Information Processing Systems (NeurIPS).

May 6, 2023: Our paper titled "ViSaRL: Visual Reinforcement Learning Guided by Human Saliency" got accepted to the Pretraining for Robotics Workshop at ICRA 2023.

Jan 30, 2023: Our paper titled "Active Reward Learning from Online Preferences" got accepted to the 2023 International Conference on Robotics and Automation (ICRA).