I want to get a quick indication of how much of my FOV from the StereoDepth node has valid depth values.

I use this as a crude way of checking lens-cover detection, combined with correlating downscaled frames from the various sensors. Is there a way to calculate the depth coverage (the fraction of non-zero values) on-device and share it with the host, thus eliminating the need to send full depth frames over the already limited USB 2.0 interface (using a CM4 host)?

I thought of using the Script node, but since it has no access to libraries such as numpy, this check might be quite computationally expensive (A LOT of for-loop iterations - not one of Python's strong suits)…

I just want the node to spit out a single value between 0 and 1 to the host, indicating how much depth coverage it is actually seeing.
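For reference, this is roughly what I had in mind for the Script node - a sketch only, sampling every Nth pixel to keep the loop count manageable. It assumes ImgFrame.getData() and the struct module are available in the node's MicroPython environment; the stream names ('disp', 'cov') and the step size are placeholders:

```python
# Script node (MicroPython): estimate the fraction of non-zero disparity
# pixels from a subsample instead of iterating over every pixel.
import struct

STEP = 64  # sample every 64th pixel: ~4k iterations for a 640x400 UINT8 frame

while True:
    frame = node.io['disp'].get()   # ImgFrame linked from the StereoDepth output
    data = frame.getData()
    nonzero = 0
    samples = 0
    for i in range(0, len(data), STEP):
        samples += 1
        if data[i] != 0:
            nonzero += 1
    out = Buffer(4)
    out.setData(struct.pack('f', nonzero / samples))  # coverage in [0, 1]
    node.io['cov'].send(out)
```

On the host the value would unpack with struct.unpack('f', msg.getData())[0].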

    MernokAdriaan
    I would say just send the confidence map (or disparity map) back. They should both be UINT8, so they shouldn't take up much bandwidth.
    Alternatively - if you really need this - you can make a custom NN node that receives a depth image and outputs the number of non-zero pixels: luxonis/oak-examples/tree/master/gen2-custom-models. It's not really an NN model, but it allows you to run custom operations on the VPU.
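    If you go that route, a minimal sketch following the gen2-custom-models pattern (a trivial PyTorch op graph exported to ONNX, then compiled to a .blob, e.g. with blobconverter) could look like this - the input shape and file name are placeholders, and whether the Greater/ReduceMean ops map cleanly onto the VPU is worth verifying with the conversion toolchain:

    ```python
    import torch

    class DepthCoverage(torch.nn.Module):
        # Not a real network: a fixed op graph that outputs the
        # fraction of non-zero pixels in the input frame.
        def forward(self, img):
            return torch.mean((img > 0).float())

    # Export to ONNX; compile to a .blob afterwards, e.g. with blobconverter.
    torch.onnx.export(
        DepthCoverage(),
        torch.zeros(1, 1, 400, 640),  # placeholder 400p frame shape
        "coverage.onnx",
        opset_version=11,
    )
    ```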

    Thanks,
    Jaka

      jakaskerl

      Thanks for the suggestions, but I'm noticing a significant increase in latency when sending the disparity map through. I'm already sending synced, resized (downscaled) frames for all sensors (the disparity map is synced with these so that the lens-cover detection algorithm works on synced data), as well as a preview feed and an encoded video feed, so I suspect the USB 2.0 connection is already close to maxed out.
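      For a rough sense of scale: a 640x400 UINT8 disparity stream at 30 FPS alone is 640 * 400 * 30 ≈ 7.7 MB/s, while USB 2.0 typically delivers only around 30-35 MB/s in practice, so several such streams plus preview and encoded video could plausibly saturate the link.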

      W.r.t. the NN node option - this is not preferred, as I already have an object detection model running, and splitting resources between two NNs will most likely reduce my FPS and increase detection latency.

      I was hoping there was an inexpensive way to do this on the SoM, but it seems I'll have to investigate further where the bottleneck really is.
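      In the meantime, I'll measure per-stream latency on the host by comparing each frame's device timestamp against the host clock - something along these lines (DepthAI keeps the device and host clocks synchronized):

      ```python
      import depthai as dai

      def frame_latency_ms(img_frame: dai.ImgFrame) -> float:
          # The difference between "now" and the frame's capture timestamp
          # approximates the end-to-end latency of that stream.
          return (dai.Clock.now() - img_frame.getTimestamp()).total_seconds() * 1000
      ```

      Calling this on frames from each output queue should show which stream is lagging.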