@rsantan
Then SpatialDetectionNode is the way to go.
luxonis/depthai-shared/blob/main/include/depthai-shared/datatype/RawSpatialImgDetections.hpp should be the struct for SpatialImgDetections.
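For reference, a minimal Python sketch of consuming those messages (the blob path and stream name are placeholders, and I'm using the MobileNet flavor of the spatial detection node just as an example):

```python
import depthai as dai

pipeline = dai.Pipeline()

# Color camera feeding the detection network
cam = pipeline.create(dai.node.ColorCamera)
cam.setPreviewSize(300, 300)
cam.setInterleaved(False)

# Stereo depth for the spatial part
monoL = pipeline.create(dai.node.MonoCamera)
monoR = pipeline.create(dai.node.MonoCamera)
monoL.setBoardSocket(dai.CameraBoardSocket.LEFT)
monoR.setBoardSocket(dai.CameraBoardSocket.RIGHT)
stereo = pipeline.create(dai.node.StereoDepth)
monoL.out.link(stereo.left)
monoR.out.link(stereo.right)
stereo.setDepthAlign(dai.CameraBoardSocket.RGB)  # align depth to color

# Spatial detection network (MobileNet flavor as an example)
nn = pipeline.create(dai.node.MobileNetSpatialDetectionNetwork)
nn.setBlobPath("model.blob")  # placeholder: your compiled blob
nn.setConfidenceThreshold(0.5)
cam.preview.link(nn.input)
stereo.depth.link(nn.inputDepth)

xout = pipeline.create(dai.node.XLinkOut)
xout.setStreamName("detections")
nn.out.link(xout.input)

with dai.Device(pipeline) as device:
    q = device.getOutputQueue("detections", maxSize=4, blocking=False)
    while True:
        msg = q.get()  # dai.SpatialImgDetections
        for det in msg.detections:
            # Fields mirror RawSpatialImgDetections: normalized bbox corners,
            # label, confidence, and spatial XYZ in millimeters.
            print(det.label, det.confidence,
                  det.spatialCoordinates.x,
                  det.spatialCoordinates.y,
                  det.spatialCoordinates.z)
```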
Thanks,
Jaka
Hi @rsantan ,
If you plan to do camera/lidar fusion later anyway (on the host, I presume), then I'd just skip SpatialDetectionNode and calculate spatial coordinates (from bounding box + depth map) on the host directly; demo here:
luxonis/depthai-experiments/tree/master/gen2-calc-spatials-on-host
You'd just need to align the depth map from the lidar to the OAK's color stream, which will itself be the main challenge.
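To give a flavor of what the demo does, here's a simplified numpy sketch of the host-side calculation (the function name and the HFOV-based intrinsics approximation are mine; see the demo for the exact code):

```python
import math
import numpy as np

def spatials_from_bbox(depth_frame, bbox, hfov_deg):
    """Rough host-side spatial-coordinate calculation.

    depth_frame: HxW uint16 depth map in millimeters, aligned to the
                 frame the detections were made on.
    bbox:        (xmin, ymin, xmax, ymax) in pixels.
    hfov_deg:    horizontal field of view of the depth sensor, degrees.
    """
    xmin, ymin, xmax, ymax = bbox
    roi = depth_frame[ymin:ymax, xmin:xmax]
    valid = roi[roi > 0]                # 0 means "no depth measured"
    if valid.size == 0:
        return None
    z = float(np.median(valid))         # robust distance estimate, mm

    # Approximate focal length (in pixels) from the horizontal FOV, then
    # derive X/Y from the ROI center's offset from the image center.
    h, w = depth_frame.shape
    f = w / (2.0 * math.tan(math.radians(hfov_deg) / 2.0))
    cx, cy = (xmin + xmax) / 2.0, (ymin + ymax) / 2.0
    x = z * (cx - w / 2.0) / f
    y = z * (cy - h / 2.0) / f
    return x, y, z  # millimeters, camera-centric
```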
Hi @rsantan ,
For the API, we have the source code here: luxonis/depthai-core
Note that the firmware (which the API communicates with) is closed-source, as that's the core of our IP. So you have an object detection model that isn't one of the standard ones (YOLO/MobileNet) and want to use it together with SpatialDetectionNode?
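If so, one possible workaround is to decode the detections yourself (on the host, or on the device in a Script node) and feed the resulting ROIs to a SpatialLocationCalculator node, which computes XYZ from depth for arbitrary regions. A rough sketch of the node setup (the ROI values and stream name are placeholders; in practice you'd push per-detection ROIs at runtime through slc.inputConfig):

```python
import depthai as dai

pipeline = dai.Pipeline()

# Stereo depth feeding the spatial-location calculator
monoL = pipeline.create(dai.node.MonoCamera)
monoR = pipeline.create(dai.node.MonoCamera)
monoL.setBoardSocket(dai.CameraBoardSocket.LEFT)
monoR.setBoardSocket(dai.CameraBoardSocket.RIGHT)
stereo = pipeline.create(dai.node.StereoDepth)
monoL.out.link(stereo.left)
monoR.out.link(stereo.right)

slc = pipeline.create(dai.node.SpatialLocationCalculator)
stereo.depth.link(slc.inputDepth)

# One static ROI as a placeholder (normalized coordinates)
cfg = dai.SpatialLocationCalculatorConfigData()
cfg.roi = dai.Rect(dai.Point2f(0.4, 0.4), dai.Point2f(0.6, 0.6))
slc.initialConfig.addROI(cfg)

xout = pipeline.create(dai.node.XLinkOut)
xout.setStreamName("spatialData")
slc.out.link(xout.input)

with dai.Device(pipeline) as device:
    q = device.getOutputQueue("spatialData", maxSize=4, blocking=False)
    while True:
        for loc in q.get().getSpatialLocations():
            print(loc.spatialCoordinates.x,
                  loc.spatialCoordinates.y,
                  loc.spatialCoordinates.z)  # millimeters
```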
Hi @rsantan
Other than checking the Ethernet bandwidth (1000 Mbps with a CAT5E cable), I'm not sure; it's likely a ROS-specific issue. Maybe there are flags to lower the latency. cc @Luxonis-Adam
Thanks,
Jaka
Hi @rsantan, for now the updates are available on this branch; they will later be merged into the main Humble and other branches. Regarding Galactic, we no longer support it, as it has reached EOL and the ROS organization won't publish further updates for it, so we recommend using either the Humble or Iron version.