Hello,

I was wondering where the /oak node is located within the depthai source code.

I want to see how you structured your /oak/nn/spatialdetections message. I want to feed the spatial_bb node the output of my ML model, which only produces a bounding box in 2D space. I can provide the x/y position and x/y box size, but there are other fields, like pose.position x/y/z, which my model does not output.

Basically, I just want to adjust that specific publisher and see how you get the data from your bounding boxes.
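
To make it concrete, this is roughly the shape of the publisher I have in mind; a minimal rclpy sketch assuming the depthai_ros_msgs/SpatialDetectionArray layout (2D bbox plus a 3D position). The field names are my reading of the published topic, not checked against your source, and the frame_id is a placeholder:

```python
# Sketch: publish a SpatialDetection with only the 2D fields my model
# provides; the 3D position stays zeroed because my model cannot fill it.
import rclpy
from rclpy.node import Node
from depthai_ros_msgs.msg import SpatialDetection, SpatialDetectionArray


class MyModelPublisher(Node):
    def __init__(self):
        super().__init__('my_model_spatial_pub')
        # Topic name copied from the /oak node; adjust to your namespace.
        self.pub = self.create_publisher(
            SpatialDetectionArray, '/oak/nn/spatialdetections', 10)

    def publish_box(self, cx, cy, w, h):
        det = SpatialDetection()
        # 2D box: my model outputs center x/y and box size only.
        det.bbox.center.x = float(cx)  # Humble+ vision_msgs: center.position.x
        det.bbox.center.y = float(cy)
        det.bbox.size_x = float(w)
        det.bbox.size_y = float(h)
        # 3D position: nothing to put here, so it stays 0,0,0.
        det.position.x = 0.0
        det.position.y = 0.0
        det.position.z = 0.0
        msg = SpatialDetectionArray()
        msg.header.stamp = self.get_clock().now().to_msg()
        msg.header.frame_id = 'oak_rgb_camera_optical_frame'  # placeholder
        msg.detections = [det]
        self.pub.publish(msg)


def main():
    rclpy.init()
    node = MyModelPublisher()
    node.publish_box(320.0, 240.0, 100.0, 80.0)
    rclpy.shutdown()
```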

Hi @rsantan ,
For the API, we have the source code here: luxonis/depthai-core
Note that the firmware (which the API communicates with) is closed-source, as it's the core of our IP. So you have an object detection model that isn't standard (YOLO/MobileNet) and want to use it together with SpatialDetectionNode?

Hi @erik,

That is correct. I have an object detection model designed by my team for a very specific object. However, its purpose is just to detect its presence and give us a bounding box within the ROI. We then do camera/lidar fusion to get the distance and the rest of the measurements.

We want to use this new approach to make everything work with just one module (in this case, the OAK camera), so I want to feed my model into the SpatialDetectionNode and get the XYZ distance and spatial marker.
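
For reference, the stock pipeline I'm trying to adapt looks roughly like this (a sketch with the depthai Python API; the blob path and input size are placeholders, and the MobileNet decoding is exactly what my custom model doesn't match):

```python
import depthai as dai

pipeline = dai.Pipeline()

# Color camera feeds the NN; preview size must match the model input.
cam = pipeline.create(dai.node.ColorCamera)
cam.setPreviewSize(300, 300)  # placeholder input size
cam.setInterleaved(False)

# Stereo pair produces the depth map the spatial node samples from.
mono_l = pipeline.create(dai.node.MonoCamera)
mono_r = pipeline.create(dai.node.MonoCamera)
mono_l.setBoardSocket(dai.CameraBoardSocket.LEFT)
mono_r.setBoardSocket(dai.CameraBoardSocket.RIGHT)

stereo = pipeline.create(dai.node.StereoDepth)
stereo.setDepthAlign(dai.CameraBoardSocket.RGB)  # align depth to color
mono_l.out.link(stereo.left)
mono_r.out.link(stereo.right)

nn = pipeline.create(dai.node.MobileNetSpatialDetectionNetwork)
nn.setBlobPath('model.blob')  # placeholder path
nn.setConfidenceThreshold(0.5)
cam.preview.link(nn.input)
stereo.depth.link(nn.inputDepth)  # the depth link is what yields XYZ

xout = pipeline.create(dai.node.XLinkOut)
xout.setStreamName('detections')
nn.out.link(xout.input)

with dai.Device(pipeline) as device:
    q = device.getOutputQueue('detections')
    for d in q.get().detections:
        # Spatial coordinates are reported in millimeters.
        print(d.spatialCoordinates.x,
              d.spatialCoordinates.y,
              d.spatialCoordinates.z)
```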

Hi @rsantan ,
If you plan to do camera/lidar fusion later anyway (on the host, I presume), then I'd just skip the SpatialDetectionNode and calculate spatial coordinates (from the bounding box + depth map) on the host directly; demo here:
luxonis/depthai-experiments/tree/master/gen2-calc-spatials-on-host

You'd just need to align the depth map from the lidar to the OAK's color stream, which by itself will be the main challenge.
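
The math behind that demo is simple. A rough sketch (not the demo's exact code; the demo averages a thresholded ROI and reads the HFOV from device calibration, here it's a placeholder):

```python
import math
import numpy as np

HFOV_DEG = 71.9  # placeholder; read the real value from calibration


def spatials_from_bbox(depth_mm: np.ndarray, xmin, ymin, xmax, ymax):
    """Median depth inside the box, projected to X/Y/Z in millimeters.

    Assumes the depth frame is already aligned to the image your
    bounding boxes live in.
    """
    roi = depth_mm[ymin:ymax, xmin:xmax]
    valid = roi[roi > 0]  # 0 means no stereo match at that pixel
    if valid.size == 0:
        return None
    z = float(np.median(valid))  # demo averages; median is a simpler stand-in

    h, w = depth_mm.shape
    # Pinhole model: focal length in pixels from the horizontal FOV.
    focal_px = w / (2 * math.tan(math.radians(HFOV_DEG) / 2))
    cx, cy = (xmin + xmax) / 2, (ymin + ymax) / 2
    x = z * (cx - w / 2) / focal_px
    y = z * (cy - h / 2) / focal_px  # note: image y grows downward
    return x, y, z
```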

Hello @erik

The idea is actually to get rid of the camera/lidar fusion. That is why I am trying to get this working.

Basically, I'm looking for an alternative solution that can be implemented with just one piece of hardware and fewer interfaces.