Hi all,
I have been using an OAK-D camera to record videos and disparity frames, with the goal of using the recordings for spatial object detection not only in DepthAI but in other frameworks as well. For that, I need to compute the depth from disparity on the CPU (storing the depth directly is too storage-expensive), without using the DepthAI hardware. I am following the formulation described in the stereo-depth-distance guide. In code, my procedure is:
import numpy as np

baseline = 75  # OAK-D stereo baseline, in mm
hfov = 71.9  # horizontal field of view, in degrees
width = 800  # camera resolution is 800P
focal_length = width * 0.5 / np.tan(hfov * 0.5 * np.pi / 180)  # focal length in pixels
depth = focal_length * baseline / disparity  # depth in mm, since baseline is in mm
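As a sanity check on the numbers: this gives focal_length = 400 / tan(35.95°) ≈ 551 px, so (with the baseline in mm) a disparity of, say, 30 px would map to a depth of about 551 × 75 / 30 ≈ 1379 mm.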
and I get the disparity as:
qIn = device.getOutputQueue("disparity")
dispFrames = qIn.tryGetAll()
for disp_frame in dispFrames:
    disparity = disp_frame.getFrame()
    # compute the depth as above
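In case it helps, here is a self-contained sketch of the whole conversion as I understand it (disparity_to_depth is just a helper name I made up). I am assuming the disparity frame is in plain pixel units, i.e. subpixel mode is off (with subpixel enabled the raw values would first need to be divided by the subpixel scaling factor), and that the baseline is in mm, so the depth comes out in mm:

import numpy as np

baseline = 75.0  # OAK-D stereo baseline, in mm
hfov = 71.9  # horizontal field of view, in degrees
width = 800  # taken from my snippet above
focal_length = width * 0.5 / np.tan(np.deg2rad(hfov * 0.5))  # focal length in pixels

def disparity_to_depth(disparity):
    # Convert a disparity frame (pixels) to a depth frame (mm),
    # leaving pixels with disparity 0 (no measurement) at depth 0
    # to avoid division by zero.
    disparity = disparity.astype(np.float32)
    depth = np.zeros_like(disparity)
    valid = disparity > 0
    depth[valid] = focal_length * baseline / disparity[valid]
    return depth

Even with the zero-disparity pixels masked out like this, the values I get still differ a lot from the hardware depth.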
However, the depth values I get are very different from the depth values obtained through the DepthAI hardware. Is there something wrong with my calculations? Did I miss something? Any guidance would be appreciated.
Thanks.