multisensory depth-camera experiment
Real-time perception stack that converts camera data into spatial understanding for robot navigation. Combines learned depth estimation, visual SLAM, and ROS 2 integration with Isaac Sim and Gazebo. Emphasizes measurable accuracy, low latency, and a deployable architecture.
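A minimal sketch of how a learned depth estimation stage could plug into the ROS 2 side of a stack like this. The node name, topic names, and the zero-filled placeholder model are assumptions for illustration, not the repository's actual implementation.

```python
# Hypothetical ROS 2 node: subscribes to RGB frames, runs a (placeholder)
# learned depth model, and republishes per-pixel depth for downstream SLAM.
import numpy as np
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import Image
from cv_bridge import CvBridge


class DepthEstimationNode(Node):
    def __init__(self):
        super().__init__('depth_estimation_node')  # node name is an assumption
        self.bridge = CvBridge()
        # Topic names below are assumptions; adjust to the camera driver in use.
        self.sub = self.create_subscription(Image, '/camera/image_raw', self.on_image, 10)
        self.pub = self.create_publisher(Image, '/depth/estimated', 10)

    def on_image(self, msg: Image) -> None:
        rgb = self.bridge.imgmsg_to_cv2(msg, desired_encoding='bgr8')
        depth = self.estimate_depth(rgb)
        out = self.bridge.cv2_to_imgmsg(depth, encoding='32FC1')
        out.header = msg.header  # keep timestamp/frame_id so SLAM can sync frames
        self.pub.publish(out)

    def estimate_depth(self, rgb: np.ndarray) -> np.ndarray:
        # Placeholder for the learned monocular depth model (assumption):
        # returns a zero depth map with the input's height and width.
        return np.zeros(rgb.shape[:2], dtype=np.float32)


def main():
    rclpy.init()
    rclpy.spin(DepthEstimationNode())
    rclpy.shutdown()


if __name__ == '__main__':
    main()
```

In a simulator-in-the-loop setup (Isaac Sim or Gazebo), the same node can be pointed at the simulated camera topic, which makes it straightforward to measure depth accuracy and end-to-end latency before deploying on hardware.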