Autonomous mobile robots must operate in a diverse range of environments that pose major challenges for the current state of the art in perception. For example, autonomous road vehicles need to operate in darkness and fog, and search-and-rescue robots need to operate in thick smoke. These conditions, referred to as Visually Degraded Environments (VDEs), degrade the performance of cameras and LiDAR, and this degradation can easily lead to higher-level failures. Further, in many cases robots require perception in VDEs that far exceeds human capability: an autonomous road vehicle, for example, must continue to perceive reliably in conditions that would impair a human driver, since a perception failure could endanger human lives. If techniques for robust perception in VDEs are not developed, the usefulness of autonomous mobile robots will be severely limited.
This workshop aims to highlight new developments in the field of robust perception and state estimation. We will bring together experts in the field to share their work on novel sensors, new algorithms, and full perception systems. Our goal is to bring more attention to this important area and to encourage sharing and collaboration between the robust, resilient perception community and a diverse range of related research areas, including computer vision, robust AI, and field robotics.
The workshop took place on May 31st, 2021.
Davide Scaramuzza - Robust Perception For Cars And Drones
CJ Taylor - UPSLAM: Union of Panoramas SLAM
Sebastian Scherer - Robust Navigation with Visual and Thermal Sensors in Degraded Visual Environments
Jeannette Bohg - Detect, Reject, Correct: Cross-modal Compensation of Corrupted Sensors
Claire Tomlin - Learning-Based Waypoint Navigation: a Viewpoint on Perception, Planning, and Control
Larry Matthies - Terrain-relative navigation for guided descent on Titan
Sanjiv Singh - Faster, Lighter, More Reliable: Commonplace autonomous systems need all three
Tim Barfoot - Dark, Damp, and Dynamic: Recent Progress on Robotic Localization in Challenging Environments
Self-Improving Semantic Perception on a Construction Robot - Hermann Blum, Francesco Milano, René Zurbrügg, Roland Siegwart, Cesar Cadena, and Abel Gawel
Calibrating LiDAR and Camera Using Semantic Mutual Information - Peng Jiang, Philip Osteen, and Srikanth Saripalli
MIXER: A Principled Framework for Multimodal, Multiway Data Association - Parker Lusk, Ronak Roy, Kaveh Fathian, and Jonathan How
Autonomous Quadrotor Flight despite Rotor Failure with Onboard Vision Sensors: Frames vs. Events - Sihao Sun, Giovanni Cioffi, and Davide Scaramuzza
On the Design of Robust and Reliable Vision Front-Ends for Visually Degraded Environments - Vikrant Shah, Jagatpreet Singh, Pushyami Kaveti, and Hanumant Singh
Event-based Monocular Depth Prediction in Night Driving - Javier Hidalgo-Carrió, Daniel Gehrig, and Davide Scaramuzza
Redesigning SLAM for Arbitrary Multi-Camera Systems - Juichung Kuo, Manasi Muglikar, Zichao Zhang, and Davide Scaramuzza
High-Speed Drone Flight with On-Board Sensing and Computing - Antonio Loquercio, Elia Kaufmann, Yunlong Song, and Davide Scaramuzza
Inertial Learning for Improved Dynamic Legged Robot State Estimation - Russell Buchanan, Marco Camurri, and Maurice Fallon
Please send any questions to Andrew Kramer at email@example.com. Please include “Robust Perception ICRA 2021 Workshop” in the subject of the email.