Hello everyone,
I recently purchased an OAK-D Lite camera and I'm trying to integrate it into my ROS 2-based robot to detect obstacles that sit below the lidar's scan plane, so the robot can avoid colliding with them. So far, I've succeeded in feeding the camera's point cloud into both the local and global costmaps. However, I've noticed that the navigation and path-planning algorithms give less weight to obstacles detected by the camera than to those detected by the 360° lidar.
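For reference, this is roughly how the obstacle layer of my local costmap is set up (the topic names `/scan` and `/oak/points`, and the height/range values, are placeholders specific to my setup, so treat this as a sketch rather than a known-good config):

```yaml
local_costmap:
  local_costmap:
    ros__parameters:
      plugins: ["voxel_layer", "inflation_layer"]
      voxel_layer:
        plugin: "nav2_costmap_2d::VoxelLayer"
        enabled: true
        observation_sources: scan camera_points
        scan:
          topic: /scan                 # 360° lidar
          data_type: "LaserScan"
          marking: true
          clearing: true
          obstacle_max_range: 6.0
          raytrace_max_range: 8.0
        camera_points:
          topic: /oak/points           # OAK-D Lite point cloud (placeholder topic)
          data_type: "PointCloud2"
          marking: true
          clearing: true
          # Low obstacles missed by the lidar sit below its scan plane,
          # so min_obstacle_height is kept close to the floor.
          min_obstacle_height: 0.02
          max_obstacle_height: 1.5
          obstacle_max_range: 3.0
      inflation_layer:
        plugin: "nav2_costmap_2d::InflationLayer"
        cost_scaling_factor: 3.0
        inflation_radius: 0.55
```

As far as I understand, the planner itself doesn't weight sources differently once cells are marked — a lethal cell is lethal regardless of which sensor produced it — so if camera obstacles seem to be ignored, the cause is more likely the marking/clearing ranges or the height filters than the planner.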
Additionally, I've come across the concept of sensor fusion, and I'm interested in implementing it with the OAK-D. Has anyone successfully implemented sensor fusion with this camera in their project, or does anyone know of any resources or guides that could help?
Thank you in advance.