Hi,

I need to perform spatial object tracking on persons (similar to the Spatial object tracker on RGB sample).

From the MobileNetSpatialDetectionNetwork, I pass the SpatialImgDetections (including the x, y, z data) to the ObjectTracker.

But the Tracklets from the ObjectTracker don't include any spatial data, only the regular bounding box (xmin, xmax…).

Can I get the spatial data from the ObjectTracker, or is something wrong with my setup?

(For context, this runs within the TouchDesigner platform with the DepthAI SDK 2.22.0.0.)

thanks

    Hi XvGt
    Each tracklet should include a spatialCoordinates property, which should have all of the x, y, and z info. Are you sure you are linking the pipeline correctly?

    Perhaps you are running object_tracker.py instead of spatial_object_tracker.py?
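
    For reference, reading them on the host looks roughly like this (a minimal sketch that assumes the pipeline from spatial_object_tracker.py, with the ObjectTracker output linked to an XLinkOut stream named "tracklets" - the stream name is an assumption taken from that example):

        import depthai as dai

        # `pipeline` is assumed to be built as in spatial_object_tracker.py,
        # with objectTracker.out linked to an XLinkOut named "tracklets"
        with dai.Device(pipeline) as device:
            trackletsQueue = device.getOutputQueue(name="tracklets", maxSize=4, blocking=False)

            while True:
                track = trackletsQueue.get()  # dai.Tracklets message
                for t in track.tracklets:
                    roi = t.roi                    # normalized bounding box (Rect)
                    coords = t.spatialCoordinates  # dai.Point3f, in millimetres
                    print(t.id, t.status.name, int(coords.x), int(coords.y), int(coords.z))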

    Thanks,
    Jaka

    Hi Jaka,

    That's what I thought; otherwise it would be completely illogical.

    But the documentation only mentions ImgDetections as the input to the ObjectTracker. There is no mention of SpatialImgDetections (which include the x, y, z).
    Can the ObjectTracker process both? (An oversight in the documentation?)

    My pipeline linking is correct:

    ColorCamera + StereoDepth >> MobileNetSpatialDetectionNetwork (SpatialImgDetections) >> ObjectTracker.inputDetections.
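
    In Python terms, the linking I'm describing is roughly this (a simplified sketch of the pipeline I build and upload; the blob path, thresholds, and variable names are illustrative, not my exact code):

        import depthai as dai

        pipeline = dai.Pipeline()

        # Nodes
        camRgb = pipeline.create(dai.node.ColorCamera)
        monoLeft = pipeline.create(dai.node.MonoCamera)
        monoRight = pipeline.create(dai.node.MonoCamera)
        stereo = pipeline.create(dai.node.StereoDepth)
        spatialNN = pipeline.create(dai.node.MobileNetSpatialDetectionNetwork)
        objectTracker = pipeline.create(dai.node.ObjectTracker)
        xoutTracklets = pipeline.create(dai.node.XLinkOut)
        xoutTracklets.setStreamName("tracklets")

        # Basic configuration (details trimmed)
        camRgb.setPreviewSize(300, 300)
        camRgb.setInterleaved(False)
        monoLeft.setBoardSocket(dai.CameraBoardSocket.LEFT)
        monoRight.setBoardSocket(dai.CameraBoardSocket.RIGHT)
        stereo.setDepthAlign(dai.CameraBoardSocket.RGB)

        spatialNN.setBlobPath("mobilenet-ssd.blob")  # placeholder path
        spatialNN.setConfidenceThreshold(0.5)

        objectTracker.setDetectionLabelsToTrack([15])  # MobileNet-SSD label 15 = person
        objectTracker.setTrackerType(dai.TrackerType.ZERO_TERM_COLOR_HISTOGRAM)

        # ColorCamera + StereoDepth feed the spatial detection network
        monoLeft.out.link(stereo.left)
        monoRight.out.link(stereo.right)
        camRgb.preview.link(spatialNN.input)
        stereo.depth.link(spatialNN.inputDepth)

        # The spatial detections (SpatialImgDetections) go straight into the tracker
        spatialNN.passthrough.link(objectTracker.inputTrackerFrame)
        spatialNN.passthrough.link(objectTracker.inputDetectionFrame)
        spatialNN.out.link(objectTracker.inputDetections)

        objectTracker.out.link(xoutTracklets.input)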

    I cannot directly use the sample provided by Luxonis. I'm developing within an application / development environment, TouchDesigner, which has the DepthAI SDK integrated. I can only create and upload the pipeline (the "# Create pipeline" part of the sample).

    I am relying on internal operators for the message/stream retrieval. This is perhaps where the problem lies, in the message formatting.

    I will check with the TouchDesigner team.

    Thanks