Hello,

I've been wondering if there is already an established way to calculate the distance to a centroid based on the pointcloud data.

I have a model that takes raw images and outputs a bounding box and a centroid in 2D space. Since the OAK-D Pro has a laser built in, I wanted to use this capability for more accurate position estimation.

I am working in ROS2 and currently using the ROS drivers' examples. Do you guys have anything within ROS for this purpose, or something similar?

My main concern is whether I need to fuse the pointcloud with the image myself, or whether DepthAI already does this for us.

@Luxonis-Adam So I already get the spatial bounding boxes from my ML model. What I am trying to accomplish is to calculate the distance to the bounding box based on the pointcloud and not the 2D image.

I understand the algorithms and the Euclidean distance formula to calculate it, but what I was wondering is whether you have a node that translates the 2D coordinate system of your image into the 3D coordinate system of the pointcloud based on the stereo calibration parameters (basically fusing them together).

That way I can use the pointcloud coordinates instead of the image coordinates directly.

Does that make any sense?

Currently I am getting the centroid distance for my box in pixels from the raw image, so I would also need the calibration parameters of the camera to translate it into cm. That would be nice if possible too.
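
To make it concrete, this is roughly what I am after: a minimal sketch (untested) that reads the intrinsics from a CameraInfo topic and back-projects the centroid pixel plus a depth value into metric camera coordinates. The topic name `/oak/rgb/camera_info` and the hard-coded centroid/depth values are just placeholders I made up, not something from your examples:

```python
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import CameraInfo


class IntrinsicsListener(Node):
    def __init__(self):
        super().__init__('intrinsics_listener')
        # Assumed topic name; adjust to whatever the driver actually publishes
        self.sub = self.create_subscription(
            CameraInfo, '/oak/rgb/camera_info', self.on_camera_info, 10)

    def on_camera_info(self, msg: CameraInfo):
        # CameraInfo.k is the row-major 3x3 intrinsic matrix
        fx, fy = msg.k[0], msg.k[4]
        cx, cy = msg.k[2], msg.k[5]

        # Placeholder centroid (u, v) in pixels and depth z in meters,
        # e.g. sampled from the aligned depth image at the centroid pixel
        u, v, z = 320.0, 240.0, 1.5

        # Pinhole back-projection: pixel + depth -> metric camera coordinates
        x = (u - cx) * z / fx
        y = (v - cy) * z / fy
        self.get_logger().info(
            f'Centroid in camera frame: ({x:.3f}, {y:.3f}, {z:.3f}) m')


def main():
    rclpy.init()
    rclpy.spin(IntrinsicsListener())


if __name__ == '__main__':
    main()
```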

As mentioned, this node calculates the distance of the point in 3D space based on the stereo camera model.
Spatial bounding boxes coming from the camera have their coordinates in image space (px); this node translates them to meters so that they can be overlaid on the pointcloud, as shown in the image.
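
Once the coordinates are in meters, the distance to a box centroid is simply the Euclidean norm of that point. An illustrative snippet (not the node's actual code):

```python
import math

# Example centroid of a spatial bounding box already translated
# to metric camera coordinates (meters); values are made up
x, y, z = 0.12, -0.05, 1.50

# Straight-line distance from the camera origin to the centroid
distance_m = math.sqrt(x * x + y * y + z * z)
print(f"Distance to centroid: {distance_m:.3f} m")
```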

I get the following error when trying to launch that node:

[ERROR] [component_container-1]: process has died [pid 67195, exit code -7, cmd '/opt/ros/humble/lib/rclcpp_components/component_container --ros-args --log-level info --ros-args -r __node:=oak_container -r __ns:=/'].