Great question. This is a result of the standard disparity mode, which is all that is currently available on OAK-D. In standard mode, the disparity search spans up to 96 pixels, and the results are single, full-pixel offsets.
In terms of depth steps, that means there are only 96 discrete depth levels in this mode, which is what you are seeing.
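To make the quantization concrete, here is a quick sketch of how integer disparity maps to depth via depth = focal_length * baseline / disparity. The focal length and baseline values below are illustrative assumptions, not calibrated OAK-D numbers:

```python
# Depth from disparity: depth = focal_length_px * baseline_m / disparity_px.
# With only integer disparities 1..96, depth can take just 96 discrete values.
FOCAL_PX = 880.0    # assumed focal length in pixels (illustrative)
BASELINE_M = 0.075  # assumed stereo baseline in meters (illustrative)

depths = [FOCAL_PX * BASELINE_M / d for d in range(1, 97)]

# The gap between adjacent depth levels grows rapidly with distance:
for d in (96, 48, 10, 2):
    level = FOCAL_PX * BASELINE_M / d
    gap = FOCAL_PX * BASELINE_M / (d - 1) - level
    print(f"disparity {d:2d}: depth {level:7.3f} m, gap to next level {gap:7.3f} m")
```

Note how the levels near the camera are densely packed while the far levels are meters apart, which is why the banding you describe is most visible on distant surfaces.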
We are planning on supporting sub-pixel disparity (see here), and actually we would have had it out already if it weren't for a bug in implementing LR-check (which improves disparity matching capability).
With subpixel, there are 5 additional bits of fractional disparity information, i.e. 2^5 = 32 levels per pixel, so there are 31 additional steps between each pair of full-pixel disparity matches.
See some details here and a visualization of what you are describing here, and reproduced below:
So with subpixel enabled, those 31 additional steps will exist in between each of the full-pixel steps you see there.
So it should be significantly more granular. Also, for visualizing the point cloud, see here for an example.
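To show how much finer this gets, here is a small comparison of the two modes under the same illustrative (assumed, not calibrated) focal length and baseline as a plain depth-from-disparity calculation:

```python
# Sub-pixel mode adds 5 fractional bits, so disparity advances in 1/32-pixel
# increments: 31 extra levels between each pair of full-pixel matches.
FOCAL_PX = 880.0    # assumed focal length in pixels (illustrative)
BASELINE_M = 0.075  # assumed stereo baseline in meters (illustrative)

def depth_m(disparity_px: float) -> float:
    return FOCAL_PX * BASELINE_M / disparity_px

full_pixel = [depth_m(d) for d in range(1, 97)]               # 96 levels
subpixel = [depth_m(d / 32) for d in range(32, 96 * 32 + 1)]  # 3041 levels

# Compare the depth gap around disparity 10 in the two modes:
print(depth_m(9) - depth_m(10))          # full-pixel step
print(depth_m(10 - 1 / 32) - depth_m(10))  # sub-pixel step, far smaller
```

The sub-pixel step around any given disparity is roughly 1/32 the size of the full-pixel step there, which is the smoothing you would see in the point cloud.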
Note that the 31 additional steps improve the precision of the system, but not necessarily the accuracy. The accuracy will likely depend on multiple factors like lighting, calibration quality, rigidity of the mount, etc.
And sorry about the delay. Have been mostly offline because we just had a kid. :-)