Welcome to the Robot Vision and Learning (RVL) lab. We are part of the Computer Science department at the University of Toronto, the MCS department at UTM, and the UofT Robotics Institute. The group is led by Prof. Florian Shkurti, and consists of students with backgrounds in robotics, machine learning, computer vision, engineering, control theory, and physics. We develop methods that enable robots to perceive, reason, and act effectively and safely, particularly in dynamic environments and alongside humans. Application areas include field robotics for environmental monitoring, visual navigation for autonomous vehicles, robotic manipulation, as well as chemistry/biology lab automation.
Our paper on generating transferable adversarial driving scenarios using neural radiance fields was accepted at CoRL. Congratulations to Yasasa and the co-authors.
Our paper on open-set 3D mapping, ConceptFusion, was accepted at RSS. Congratulations to Krishna Murthy and the wonderful co-authors, who also demoed the method live at CVPR.
The Acceleration Consortium won a $200M grant to accelerate materials discovery using chemistry, machine learning, lab automation, and robotics. We're hiring staff scientists, postdocs, and graduate and undergraduate students.
Our field robotics paper on vision-based navigation for autonomous boats was accepted at ICRA. This is a collaboration with Tim Barfoot. Philip Huang, who led the work, completed his MSc and will start his PhD at CMU.
Two papers accepted at CVPR: one on continual learning of neural networks, led by Qiao Gu, and another on sparsifying vision transformers, led by Cong Wei.
Many thanks to Michal Zajac and David Helm for visiting RVL for four and six months, respectively. It was a pleasure hosting them.
Our paper on video representation learning was accepted at CVPR for an oral presentation.
Our paper on equivariant representations for imitation learning was accepted at ICRA.
Two papers accepted at CoRL: one on task planning in large 3D scene graphs, and one on perceiving transparent objects from RGB-D sensors (oral).