• ROS
  • ROS 2 sensor fusion with OAK-D

Hello everyone,

I recently purchased an OAK-D Lite camera and I'm trying to integrate it into my ROS 2 robot to detect obstacles that sit below the lidar's scan plane and prevent collisions. So far, I've succeeded in incorporating the camera's point cloud into both the local and global costmaps. However, I've noticed that the navigation and path planning algorithms aren't giving obstacles detected by the camera as much consideration as those detected by the 360° lidar.

Additionally, I've come across the concept of sensor fusion, and I'm interested in implementing it with the OAK-D camera. Has anyone successfully implemented sensor fusion with this camera in their project, or does anyone know of any resources or guides that could help?

Thank you in advance.

Hi,

However, I've noticed that the navigation and path planning algorithms aren't giving obstacles detected by the camera as much consideration as those detected by the 360° lidar.

If the data is available in the costmaps, then the Nav2 stack should be able to account for those obstacles; I would reach out to the Nav2 authors if there is some bug that prevents that.
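
For reference, below is a minimal sketch of a Nav2 costmap layer that marks from both sensors. The topic names (/scan, /oak/points) and the height/range values are assumptions you'd adapt to your robot:

```yaml
local_costmap:
  local_costmap:
    ros__parameters:
      plugins: ["voxel_layer", "inflation_layer"]
      voxel_layer:
        plugin: "nav2_costmap_2d::VoxelLayer"
        enabled: true
        # 3D voxel grid so low obstacles aren't cleared by the lidar's rays
        origin_z: 0.0
        z_resolution: 0.05
        z_voxels: 16
        observation_sources: scan camera
        scan:
          topic: /scan                  # assumed lidar topic
          data_type: "LaserScan"
          marking: true
          clearing: true
          raytrace_max_range: 3.0
          obstacle_max_range: 2.5
        camera:
          topic: /oak/points            # assumed depthai-ros point cloud topic
          data_type: "PointCloud2"
          marking: true
          clearing: true
          min_obstacle_height: 0.02     # keep low obstacles in the map
          max_obstacle_height: 0.5
          obstacle_max_range: 2.0
```

One common gotcha: in a flat obstacle layer, the 2D lidar's clearing rays can wipe cells the camera marked below the scan plane, which makes camera obstacles look "ignored"; a voxel layer raytraces in 3D and avoids that. The planners themselves don't weight sensors differently, so once the cells are marked (and inflated) they should be treated the same.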

Additionally, I've come across the concept of sensor fusion, and I'm interested in implementing it with the OAK-D camera. Has anyone successfully implemented sensor fusion with this camera in their project, or does anyone know of any resources or guides that could help?

ROS has the robot_localization and fuse libraries. While I haven't worked with fuse, robot_localization integrated nicely with a BNO IMU; a minimal EKF parameter sketch is below.
Additionally, we are working on VIO/VSLAM nodes; they should be available in the stack in a few weeks.
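
For the robot_localization route, the usual pattern is an ekf_node fusing wheel odometry with the IMU. This is only a sketch; the topic names (/odom, /imu/data) and the choice of fused state variables are assumptions you'd tune for your platform:

```yaml
ekf_filter_node:
  ros__parameters:
    frequency: 30.0
    two_d_mode: true          # planar robot: ignore z, roll, pitch
    publish_tf: true
    odom_frame: odom
    base_link_frame: base_link
    world_frame: odom

    # Wheel odometry: fuse x/y velocity and yaw velocity
    # (order: x y z, roll pitch yaw, vx vy vz, vroll vpitch vyaw, ax ay az)
    odom0: /odom              # assumed odometry topic
    odom0_config: [false, false, false,
                   false, false, false,
                   true,  true,  false,
                   false, false, true,
                   false, false, false]

    # IMU (e.g. a BNO055): fuse orientation, angular velocity, acceleration
    imu0: /imu/data           # assumed IMU topic
    imu0_config: [false, false, false,
                  true,  true,  true,
                  false, false, false,
                  true,  true,  true,
                  true,  false, false]
    imu0_remove_gravitational_acceleration: true
```

The filtered estimate comes out on /odometry/filtered, and the node publishes the odom→base_link transform that Nav2 consumes.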